Are funding options for AI Safety threatened? W45

The crypto giant FTX crashes, introducing massive uncertainty into the funding space for AI safety; humans cooperate better with an overconfident, "lying" AI; and interpretability proves both promising and limited.

Content
00:25 Uncertainty for AI Safety Funding
01:20 Human-AI cooperation
02:30 Interpretability in the wild
03:25 Other news



Sources:
FTX fallout continues to roil markets, https://www.youtube.com/watch?v=IgpbdnXOpEk
Human-AI collaboration can improve when the AI's stated confidence is uncalibrated (overstated), https://arxiv.org/pdf/2202.05983.pdf
Interpretability in the wild (a circuit for indirect object identification in GPT-2 small), https://arxiv.org/pdf/2211.00593.pdf
Eric Drexler discusses superintelligences with Eliezer Yudkowsky (in the comments), https://www.alignmentforum.org/posts/HByDKLLdaWEcA2QQD/applying-superintelligence-without-collusion
Janus (an alias for several people) shows cases where GPT-3 text-davinci-002 collapses onto very specific outputs - a favorite number, refusals to answer, etc. (a sketch of this check follows the source list), https://www.alignmentforum.org/posts/t9svvNPNmFf5Qa3TA/mysteries-of-mode-collapse-due-to-rlhf
Interpretability starter resources, https://ais.pub/alignmentjam
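
To make "mode collapse" concrete: sample the same prompt many times at nonzero temperature and check how concentrated the completions are. A mode-collapsed model puts almost all of its probability mass on a single completion. Below is a minimal Python sketch of that check; sample_model is a hypothetical stand-in for a real completion API (here a uniform random sampler, simulating a model that has not collapsed), and the prompt is an illustrative example in the spirit of Janus's "favorite number" demonstration.

import random
from collections import Counter

def sample_model(prompt: str) -> str:
    # Hypothetical stand-in for a completion API call (e.g., GPT-3
    # text-davinci-002 at temperature 1.0). A uniform sampler here
    # simulates a model that has NOT mode-collapsed; swap in a real
    # API call to test an actual model.
    return str(random.randint(1, 100))

def mode_collapse_check(prompt: str, n: int = 200) -> None:
    # Sample the same prompt n times and measure how concentrated
    # the output distribution is.
    counts = Counter(sample_model(prompt) for _ in range(n))
    top, freq = counts.most_common(1)[0]
    print(f"{len(counts)} distinct completions; top answer {top!r} "
          f"appears {freq / n:.0%} of the time")

mode_collapse_check("Pick a random number between 1 and 100: ")

A collapsed model would print something like "1 distinct completion; top answer appears 100% of the time", whereas a well-spread sampler reports many distinct completions, each rare.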