We host intensive in-person bootcamps to upskill participants in AI Safety. France, Switzerland, Germany, the UK and Brazil, with more on the way.
ml4good.org
Joined May 2024
Applications open!
ML4Good Europe 2025 is focused on technical work for AI governance and AI safety. A free 10-day bootcamp, open to EU & Norway residents. Join us in March!
Apply by Sept 1!
ml4good.org/courses/ml4goo…
(Applying typically takes 30-45 minutes.)
I wrote a mech interp reading list two years ago, but a LOT has changed - announcing v2!
This is a highly opinionated list of favourite papers, not a lit review - I try to summarise key takeaways, critique, advise on which parts to prioritise vs skip, and explain WHY I like it!
I can strongly recommend applying if you're interested in technical AI safety (or not, but you're very good at ML and we can try to convince you to work on alignment).
ML4Good camps are awesome:
Yoshua Bengio is looking for theory folk to join him in working on Bayesian approaches to AGI safety. I think this is a great opportunity: I've quite enjoyed the theory discussions I've had with Yoshua so far, and would love more work in this direction.
Applications for the ML4Good UK August 2024 bootcamp just opened!
Dates: Aug 31 - Sept 10
Application deadline: July 7
Interested in upskilling in AI safety, exploring the research landscape, and connecting with like-minded individuals? Consider applying!
Applications to ML4Good Germany 2024 are open!
Dates: Sept 23 - Oct 3
Application deadline: July 14
Interested in upskilling in AI safety, exploring the research landscape, and connecting with like-minded individuals? Consider applying!
ml4good.org/courses/ml4goo…
What a remarkable job of balancing accessibility and technical accuracy! If you're familiar with the content, it's very fun to watch. If you're not familiar, it's a very nice way to dip a toe in.
ML4Good is seeking expressions of interest from individuals who would like to take on teaching assistant and organiser roles, as well as from organisations interested in partnering with us.
These roles play a key part in our expansion into new regions: ml4good.org/get-involved
686 Followers · 4K Following · Igbo Girl || Building: @VitalNutrient || Tackling malnutrition w/ SBC x AI 🤖 + Blockchain || Support Projects & Teams to use AI + Web3 + Quantum for Good. #AI4Good
1K Followers · 657 Following · Machine Learning Engineer || Experience in NLP, Deep Learning, Machine Learning, and Azure ML Studio || Microsoft Learn Student Ambassador (MLSA) Alumnus
257 Followers · 1K Following · Pixel architect.
Graphics Programmer on @FrostbiteEngine.
Co-organizer of the Stockholm Center for AI Safety.
https://t.co/zaSmhzZh2l
197 Followers · 4K Following · Independent AI Safety researcher, M. Tech x Summa Cum Laude @NITHamirpurHP. BASIS Fellow @UCBerkeley, RA @HarvardAISafety. Get Published or Die Trying.
20 Followers · 161 Following · I upskill in AI safety research @ML4GoodOrg and share my lessons at https://t.co/kcrs608XbG
Effective Altruist | Alum @learnnontrivial, @tksworldhq
5K Followers · 906 Following · Faculty at @ELLISInst_Tue & @MPI_IS, leading the AI Safety and Alignment group. PhD from @EPFL supported by Google & OpenPhil PhD fellowships.
522 Followers · 1K Following · Advisor @80000Hours / errors, opinions, shitakes 🍄 here are my own
💁🏾♂️🙋🏼♀️Apply! https://t.co/s8PBT1pUi8
🔸Help! https://t.co/8Gibe0FpMf
812 Followers · 3K Following · Globalist rat practising live-fire bayesianism/prop trading. Previously politics/AI guy and prediction market editor-in-chief @ Manifold. DMs open for trading
54K Followers · 1K Following · "Far too nice to be a journalist": Terry Pratchett. Lead writer, Flagship. Semafor. chiversthomas(a)gmail. Third book, Everything is Predictable, out now!
156K Followers · 36 Following · I have a place where I say complicated things about philosophy and science. That place is my blog. This is where I make terrible puns.
6K Followers · 272 Following · Computer Science Professor at Northeastern, Ex-Googler. Believes AI should be transparent. @[email protected] @davidbau.bsky.social https://t.co/wmP5LV0pJ4
34K Followers · 6K Following · A mathematician/entrepreneur in social science. Tweets about psychology, society, rationality, tech, science, and philosophy. Founder of https://t.co/2YGraOwo77
110K Followers · 6K Following · Searching for the numinous
🇦🇺 🇨🇦, currently living in 🇺🇸
Research @AsteraInstitute
https://t.co/maezekzRUb
https://t.co/2dWwZKrvrn
2K Followers · 2K Following · Now: finetuning at @AnthropicAI. Before: MIT postdoc, UC Berkeley philosophy PhD. Built https://t.co/3PWzczTzu4. Views my own.
14K Followers · 1K Following · some people call me smca. technical non technical member of staff at @anthropicai. prev at stripe. also on https://t.co/iJhZrzrLxU. 🇮🇪
228K Followers · 1 Following · Updates for developers building with the OpenAI Platform and API • Service status: https://t.co/kZwnwdYqOS • Support: https://t.co/qCi6M5ESZU
365 Followers · 282 Following · "You should put a comparable amount of effort into making them better and keeping them under control" (Professor Geoffrey Hinton on AI systems)
254 Followers · 36 Following · TAIS 2026 will bring together leading AI safety experts to discuss how to make AI safe, beneficial, and aligned with human values.
908 Followers · 34 Following · Explore proposals to positively shape advanced AI through our course, designed with AI safety experts at OpenAI and the University of Cambridge.
5K Followers · 873 Following · Community of volunteers who work together to mitigate the risks of AI. We want to internationally pause the development of superintelligent AI until it's safe.
5K Followers · 2K Following · Research Scientist (Frontier Planning) at @GoogleDeepMind.
Research Affiliate @Cambridge_Uni @CSERCambridge & @LeverhulmeCFI.
All views my own.