Consider donating to the Malaria Consortium, or the Against Malaria Foundation.
Dustin Moskovitz claims "Tesla has committed consumer fraud on a massive scale", and "people are going to jail at the end"
https://www.threads.net/@moskov/post/C6KW_Odvky0/
Not super EA relevant, but relevant inasmuch as Moskovitz funds us and Musk has in the past too. If this were just some random commentator I wouldn't take it seriously at all, but I'm a bit more inclined to believe Dustin will take some concrete action. I'm not sure I've read everything he's said about it; I'm not used to how Threads works.
Crosspost of my blog.
You shouldn’t eat animals in normal circumstances. That much is, in my view, quite thoroughly obvious. Animals undergo cruel, hellish conditions that we’d confidently describe as torture if they were inflicted on a human (or even a dog). No hamburger...
Any help is appreciated: I am looking (so far unsuccessfully) for a number that nicely illustrates that people prefer their donations to help locally rather than internationally.
P.S.: One number I found was TLYCS's claim that only 10% of individual donations go to...
For the UK, data from CAF:
"The proportion of donations going to overseas aid and disaster relief (7% -£931m) halved from a high in 2022 (14%)"
I recently interviewed Sofya Lebedeva (currently pursuing a PhD in Clinical Medicine at Oxford University & co-founder of ARMOR - Alliance for Reducing Microbial Resistance).
We discussed:
▹ How she has forged a career path in biosecurity
▹ Skills and mindsets essential for success
▹ Her thoughts on PhDs and entrepreneurship
... and more!
This was such an energising, inspiring conversation! I'd love to conduct more career interviews with ambitious, mission-driven women working on mitigating risks from emerging technologies (like AI, biotech, WMDs) — so if you would like to be interviewed or have ideas for who I should talk to next, please let me know! 🙏
As you may have noticed, 80k After Hours has been releasing a new show where I and some other 80k staff sit down with a guest for a very free-form, informal, video(!) discussion that sometimes touches on topical themes around EA and sometimes… strays a bit further afield...
I don't think my comment is likely to be all that useful, but putting it here anyway.
I personally find it difficult to pay attention to podcasts with more than two people. I tried to listen to the first episode for about 30 minutes and this one for about 5 minutes, and I couldn't comfortably follow them while doing other tasks (walking around, cleaning, cooking, etc.).
I think it's likely that more diversity in the space is good though, as many of the most popular podcasts I see on e.g. YouTube tend to have more than two people. I suspe...
Consider donating all or most of your Mana on Manifold to charity before May 1.
Manifold is making multiple changes to the way the platform works. You can read their announcement here. The main reason for donating now is that Mana will be devalued from the current 1 USD:100 ...
Forum post saying the same thing, with some discussion: https://forum.effectivealtruism.org/posts/SM3YzTsXmQ6BaFcsL/you-probably-want-to-donate-any-manifold-currency-this-week
Malaria is massive. Our World in Data writes: “Over half a million people died from the disease each year in the 2010s. Most were children, and the disease is one of the leading causes of child mortality.” Or, as Rob Mather, CEO of the Against Malaria Foundation (AMF) phrases...
These questions came up as I read for this post, and I'd love to hear answers from more knowledgeable people:
In this "quick take", I want to summarize some my idiosyncratic views on AI risk.
My goal here is to list just a few ideas that cause me to approach the subject differently from how I perceive most other EAs view the topic. These ideas largely push me in the direction...
I want to say thank you for holding the pole of these perspectives and keeping them in the dialogue. I think that they are important and it's underappreciated in EA circles how plausible they are.
(I definitely don't agree with everything you have here, but typically my view is somewhere between what you've expressed and what is commonly expressed in x-risk focused spaces. Often I'm also drawn to say "yeah, but ..." -- e.g. I agree that a treacherous turn is not so likely at global scale, but I don't think it's completely out of the question, and given that, I think safeguarding against it is worth serious attention.)
The obvious example would be synthetic biology, gain-of-function research, and similar.
Can you explain why you suspect these things should be more regulated than they currently are?
In particular, I am persuaded by the argument that, because evaluation is usually easier than generation, it should be feasible to accurately evaluate whether a slightly-smarter-than-human AI is taking unethical actions, allowing us to shape its rewards during training accordingly. After we've aligned a model that's merely slightly smarter than humans, we can use it to help us align even smarter AIs, and so on, plausibly implying that alignment will scale to indefinitely higher levels of intelligence, without necessarily breaking down at any physically realistic point.
This reasoning seems to imply that you could use GPT-2 to oversee GPT-4 by bootstrapping through a chain of models at scales between GPT-2 and GPT-4. However, this isn't true: the weak-to-strong generalization paper finds that this doesn't work, and indeed that bootstrapping like this doesn't help at all for ChatGPT reward modeling (I believe it helps on chess puzzles and on nothing else they investigate).
I think this sort of bootstrapping argument might work if we could ensure that each model in the chain was sufficiently aligned and capable of reasoning that it would carefully reason about what humans would want if they were more knowledgeable, and then rate outputs based on this. However, I don't think GPT-4 is either aligned enough or capable enough that we see this behavior. And I still think it's unlikely to work even under these generous assumptions (though I won't argue for this here).
Given how bird flu is progressing (spread in many cows, virologists lending credence to rumors that humans are getting infected, but no human-to-human spread yet), this would be a good time to start a protest movement for biosafety/against factory farming in the US.