As argued in the main text, the human race is the cause of most of the suffering in the world, and there is absolutely no chance that humanity will ever end all the suffering on earth. Worse than that, there is a probability that it would multiply suffering by shipping sentient life to other planets, and/or by creating non-biological sentient beings.
Although arguments regarding astronomical suffering, such as the one above, would serve my argument for human extinction as soon as possible, they are not my main reasons (and therefore were not included in the main text), because I am not sure how likely these scenarios are to ever happen. However, they are more likely than some would prefer to believe, especially the option of shipping life to other celestial bodies such as planets and moons, an option already widely discussed by scientists and futurists who wish to avoid the “risk” of human extinction and are therefore looking for other potential habitats. Since none of the other celestial bodies in earth’s solar system is habitable, the most realistic option for finding an alternative to earth is terraforming, which is the process of modifying a planet’s or a moon’s surface and atmosphere so as to enable life there. It would start with chemical manipulations, such as releasing stored carbon dioxide into a planet’s atmosphere to make it warmer (one of the things required for terraforming Mars, a very cold planet with a very thin atmosphere but with frozen carbon dioxide stored at its poles), or removing carbon dioxide from a planet’s atmosphere to make it colder (one of the things required for terraforming Venus).
If that is done successfully, and currently it is of course a very big IF, the next phase would probably be shipping microorganisms that consume carbon dioxide and release oxygen, in order to convert the atmosphere into one more similar to earth’s. Next would come plants, and when possible animals, until that planet becomes habitable for humans.
Another method is called directed panspermia, which is basically spreading microbial seeds of life into space in the hope that they would populate other planets and, in time, turn them into habitats fit for humans as well.
Terraforming is horrendous not only because it might enable the human race to expand to other planets, and therefore multiply the suffering caused by humans as well as practically demolish the chances of making them extinct, but also because even if the project failed to make another planet habitable for humans, it would still be extremely terrible as long as other sentient beings survived there.
Terraforming may sound like no more than science fiction to some of you, but terraforming Mars (as opposed to Venus, which I mentioned merely as an example but which is actually not a very likely candidate to even be considered for terraforming) is already a popular concept in this field. That is mainly because Mars also rotates approximately once every 24 hours, also has polar icecaps, and once had running water. However, the differences are more crucial than the similarities. Mars’s atmosphere is only about 1% as dense as earth’s, and although it is mostly CO2, it is not thick enough to produce a greenhouse effect. One of the ideas for making Mars much warmer by initiating a greenhouse effect is to release the frozen CO2 stored in its icecaps by nuking them. Another idea is to place reflectors in the sky so that the sun would warm Mars enough to melt its icecaps, as well as some of the permafrost, and so release a sufficient amount of CO2 to initiate a greenhouse effect, a chain reaction that would turn Mars into a warmer planet. Then, as mentioned earlier, the idea is to ship microorganisms that would absorb CO2 and release oxygen. One of the ideas for the stage after that is to ship termites, which have the same bacteria in their intestines as cows and are therefore responsible for a considerable amount of methane emission on earth, so they would emit methane and enhance the greenhouse effect until Mars is warm enough for humans to feel comfortable.
Some suggest the extremely horrifying prediction that if Mars is successfully terraformed, it could turn into an animal habitat on the scale of earth.
There is no reason to believe that things would be better on another planet. Humanity would probably export the horrors it commits on earth to other celestial bodies, including wars, hunger, exploitation of nonhumans and humans, misogyny, racism and so on. And just like on earth, the tiny minority of people who care would be practically helpless against governmental authorities, which are expected to be even more dependent on corporations than they are on earth, since moving to and settling on another planet would require huge financial resources, expected to come from corporations and private investors seeking to profit from their investment. Very soon, in fact even during the terraforming process, just as on earth, wealth would become the ruling factor. The interests of the investors would be more important than anything else, and as always, at the expense of everyone else.
Considering humanity’s reproduction rate, and global warming, which would allow humans to maintain life as we currently know it only in very limited areas (certainly not in areas that are already arid, nor in areas at sea level), space colonization is a very realistic option for the future. Currently it seems that there are no relevant candidates for human colonization, but that might change in the future, among other things as a result of terraforming.
This is not a very worrying scenario right now, but it is not at all implausible. In fact, many people and organizations are already addressing it very seriously, even though it still seems quite far ahead.
Having said that, I don’t think it is plausible to base such a serious argument as focusing on human extinction, even if it means that other species won’t go extinct, merely on the option that humans might one day export sentience to other planets, or that one day they might develop sentient non-biological machines (an issue I’ll address further in the text). Like the claim that there is more human-caused suffering than suffering in nature, this is not part of my main argument, which is solely the absolute impossibility that humans would ever agree to drive every sentient being on earth extinct and then themselves, and therefore that we had better at least end human-caused suffering, and as soon as possible. This is my argument, and the others are merely appendages. However, although these claims are too speculative by themselves to serve as an argument for focusing on human extinction even if it means that other species won’t go extinct, they are realistic enough to at least play some part in the reasons for human extinction as soon as possible. If it is quite probable that in the future humans would multiply suffering by developing sentient non-biological machines, or by exporting sentience to other planets, then obviously the ethical incentive for human extinction, before they are able to do so, is multiplied.
The claim that we mustn’t advocate for human extinction before every other sentient species goes extinct, because someday the human race might totally change for the better, takes a huge risk that all the suffering it causes would remain, if not multiply, as technology is a double-edged sword. And the odds are far from equal. Counting on humanity to use technology to help others is not based on any historical precedent; predicting that humanity would use technology to systematically intensify suffering, on the other hand, is based on everything that has happened throughout the history of the human race.
I am not familiar enough with the advancements in the areas of AGI and ASI (Artificial General Intelligence and Artificial Super Intelligence) to make a serious claim regarding how likely it is that artificially intelligent non-biological creatures would one day become sentient, or how likely it is that they would take over the planet and, instead of ending life on earth, use biological creatures for their own purposes, or cause more suffering in other ways.
However, if we accept the functional or computational theory of mind, meaning that a mind exists in virtue of its function and not in virtue of its matter, then there is no reason for biology’s monopoly on mind-making, often referred to as “carbon chauvinism”. According to the functional theory of mind, thoughts and feelings are a product of certain neurons and certain chemicals in certain configurations, so basically a quantum computer with enough computational power could theoretically replicate the human brain digitally, by simulating the chemicals, the neurons and the interactions between them that constitute thoughts and feelings.
And given that digital minds are much faster (signal transmission in computer hardware is millions of times faster than in biological brains) and much less restricted in terms of size (they can be as large as a building), their potential is far greater than that of a biological mind. And once artificial intelligence machines reach a certain level of deep learning, they can be instructed to further improve themselves and other artificial intelligence machines, to the point of being far better at it than humans could ever be, and with very few limitations.
Just about a decade ago, the idea that we must consider the potential danger of artificial intelligence was probably mocked wall-to-wall, but today it is starting to be seriously discussed. So a decade from now, not to mention once this technology has significantly advanced, these concerns will seem much less like science fiction and much more like a very serious problem.
Probably part of the reason that the potential danger of artificial intelligence was, and to a lesser extent still is, mocked, is that it is usually presented as a robot takeover. Obviously that scenario makes a much better script for Hollywood films than an artificial intelligence performing a task with negative outcomes which humans can’t stop because the machines are designed for constant optimization and for continuing their tasks until they are accomplished; but the latter is a much more probable option than the extremely unlikely scenarios of The Matrix and The Terminator. Most people who are concerned about the potential danger of artificial intelligence are afraid of unintentional disasters: machines accomplishing the tasks humans have given them in ways with undesirable consequences that humans didn’t consider possible, tasks interpreted the wrong way or found unclear by the machines, or tasks with unexpected consequences which humans would find hard to alter.
Other potential dangers often brought up are a dramatic alteration of human society and economy, causing mass unemployment, depression and meaninglessness, even to the point of a dystopian future of an extremely stratified human society in which most humans are bored, disinterested, useless, and easily entertained and manipulated by a tiny group of humans who control the artificially intelligent machines.
Besides being a very dire scenario in itself, and therefore another reason to sterilize the human race as soon as possible so as to prevent it, this dystopian future is also another reason to do so because it would make it even more unlikely than it already is that humans would decide to devote their lives to ending the suffering of animals in nature, while living in a society even worse than the horrible one they currently live in.
Another major concern is that, as opposed to the extremely unlikely scenario of an armed rebellion of artificial intelligence robots against humans, the use of artificial intelligence in wars between humans is very likely, and is very likely to make wars even more destructive, which would further intensify the suffering humans are causing.
Regarding the option of non-biological sentient and conscious creatures, researchers at Yale University are building a robot named Nico that they hope will be able to recognize itself in a mirror. If it works, it might mean that when looking at its own reflection, this robot would be able to understand that the figure is in fact its own. That doesn’t mean that this robot is sentient or ever would be, or that any other robot would ever be sentient, but it might be a step in that direction.
Another example is a company called Affectiva, which developed “Emotion AI”: machines which use face recognition technology and deep learning to read people’s emotional reactions. Affectiva and others are working to help machines actually understand humans on a more intimate level, basically giving them a degree of emotional intelligence.
Humanity wouldn’t hesitate to keep producing and further developing machines even if and when it realized that the latter are sentient, as long as it would be beneficial for humans. Humans knew that nonhuman animals can suffer, and not only did they keep “producing” them, they further developed their exploitation to the monstrous levels we see today (levels which are still being pushed to even more monstrous ones all the time). And nonhuman animals, as opposed to non-biological sentient creatures, scream and squirm and look and act very much like human animals when they suffer. That is not very likely in the case of artificial sentient creatures, who might suffer without humans even knowing that they are suffering. And since humans find it hard to empathize even with nonhuman animals, despite them being so similar to humans, how likely is it that humans would empathize with machines that look nothing like them (these machines are not very likely to look the way they appear in popular movies, but probably more like big computers)?
And even if most of humanity, in contrast to its horrendous history, somehow became responsible and caring about the suffering of others, all it takes is one country, or even one private company, creating these kinds of machines, or colonizing space, for much more suffering to be caused.
Despite everything said in this appendix about the option of exporting sentience to other planets, and about the option of creating sentient non-biological creatures, both options are currently too speculative to function as an integral part of my main argument specified in the main text. However, I am taking into consideration that even if the probability of space colonization or of sentient non-biological machines is small, the risk is astronomical. And anyone who thinks that one or both of these options is quite realistic must certainly take them into very serious consideration when weighing human extinction as soon as possible against the alternative of waiting for humans to absolutely change and, as opposed to every moment in their history, become benevolent, rational, ethical beings who would dedicate their entire existence to ending all the suffering on earth. Waiting for that unicorn might end up with more or less the same horrible earth as the current one, plus sentient non-biological creatures, or, in the even worse case, with another earth of that kind.
And one last point: speaking of expanding life into space, not only is EFILism currently barely known, and most of those who have heard of it firmly and fiercely reject it, but there are also people on the exact opposite side. And by that I don’t mean regular people who don’t give a fuck about anything but themselves, let alone about the suffering of animals in nature, and I don’t even mean pro-natalists, but people who literally hold ethical views opposed to EFILism, people who define their moral view as Life-Centered Ethics.
One example of this kind of people is Michael N. Mautner, who wrote an article called “Life-Centered Ethics, and the Human Future in Space” and who runs a movement advocating the expansion of life into space.
The following are some selected samples of this world view:
“Life-centered ethics are based on the unique value of life in Nature; our unity with all fellow biological life; and our power to expand life in space.”
“Plant Life where it is absent. Seed with life all space and time, and encompass in Life all the resources of the universe.”
“Our descendants will fill all habitable space and time and seek to extend life to eternity. When life permeates the universe, our human existence will have fulfilled a cosmic purpose.”
Although I don’t think that these kinds of views necessarily reflect those of the majority of people, I do think that they are way closer to people’s views than EFILism is. And the fact that these senseless, groundless, illogical, irrational, incoherent and unethical ideas would probably be more appealing to most people than the highly sensible, grounded, logical, rational, coherent and ethical ideas behind EFILism goes to show how extremely unlikely the option is of humanity someday, somehow, reaching the ethical and logical conclusion that it is better to end all sentient lives on earth, and immediately afterwards exterminate itself. Therefore we must focus on the much more likely option of human extinction by forced sterilization, first and foremost, and as soon as possible.
References
Althaus D. & Gloor, L. Reducing Risks of Astronomical Suffering: A Neglected Priority. 2016
https://foundational-research.org/reducing-risks-of-astronomical-suffering-a-neglected-priority
Baumann T. Should altruists prioritize the far future? 2017
http://prioritizationresearch.com/should-altruists-prioritize-the-far-future
Baumann T. Should altruists focus on artificial intelligence? 2017
http://prioritizationresearch.com/should-altruists-focus-on-artificial-intelligence
Baumann T. Thoughts on longtermism. 2019
http://s-risks.org/thoughts-on-longtermism
Baumann T. S-risks: An introduction. 2017
http://s-risks.org/intro
Baum S. “A Survey of Artificial General Intelligence Projects for Ethics, Risk and Policy.” Global Catastrophic Risk Institute Working Paper 17-1. 2017
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3070741
Bostrom N. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford UP, 2014
Chalmers, D. “The Singularity: A Philosophical Analysis.” Journal of Consciousness Studies 17 (9–10): 7–65. 2010
Daniel M. S-risks: Why they are the worst existential risks, and how to prevent them (EAG Boston 2017).
https://foundational-research.org/s-risks-talk-eag-boston-2017
Deudney D. Dark Skies: Space Expansionism, Planetary Geopolitics, and the Ends of Humanity. Oxford: Oxford University Press 2020
Gloor L. Altruists Should Prioritize Artificial Intelligence. 2016
https://foundational-research.org/altruists-should-prioritize-artificial-intelligence
Gloor L. Cause prioritization for downside-focused value systems. 2018
https://foundational-research.org/cause-prioritization-downside-focused-value-systems
Gloor L. & Mannino, A. The Case for Suffering-Focused Ethics. 2016
https://foundational-research.org/the-case-for-suffering-focused-ethics
Good, I. J. “Speculations Concerning the First Ultraintelligent Machine.” Advances in Computers 6: 31–88. 1966
Häggström O. “Challenges to the Omohundro-Bostrom Framework for AI Motivations.” Foresight. 2018. doi:10.1108/FS04-2018-0039
Horta O. Debunking the Idyllic View of Natural Processes: Population Dynamics and Suffering in the Wild. Télos. 17, pp. 73-88. 2010
https://masalladelaespecie.files.wordpress.com/2012/05/debunkingidyllicviewhorta.pdf
Horta O. Animal Suffering in Nature: The Case for Intervention. Environmental Ethics, 39(3), pp. 261-279. 2017
Inmendham. Best Work. YouTube playlist by graytaich0. 2011-2015
https://www.youtube.com/watch?v=b1mJnEmjlLE&list=PLcmZ9oxph4sxzDfr2oH6tpNijYUH5dy3
Knutsson S. How Could an Empty World Be Better than a Populated One? 2016
https://foundational-research.org/how-could-an-empty-world-be-better-than-a-populated
Revonsuo A. Binding and the Phenomenal Unity of Consciousness. Consciousness and Cognition 8, 173–185. 1999
Ryder R. Speciesism, Painism and Happiness: A Morality for the 21st Century. Exeter, UK: Andrews UK Ltd. 2011
Tomasik B. Will Space Colonization Multiply Wild-Animal Suffering? Foundational Research Institute. 2016
http://reducing-suffering.org/will-space-colonization-multiply-wild-animal-suffering
Tomasik B. Risks of Astronomical Future Suffering. Foundational Research Institute. 2017
https://foundational-research.org/risks-of-astronomical-future-suffering
Tomasik B. Gains from Trade through Compromise. Foundational Research Institute. 2017
https://foundational-research.org/gains-from-trade-through-compromise
Tomasik B. Omelas and Space Colonization. Foundational Research Institute. 2017
http://reducing-suffering.org/omelas-and-space-colonization
Tomasik B. Should Altruists Focus on Reducing Short-Term or Far-Future Suffering? 2015
http://reducing-suffering.org/altruists-focus-reducing-short-term-far-future-suffering
Tomasik B. The Importance of Wild-Animal Suffering. Foundational Research Institute.
http://foundational-research.org/the-importance-of-wild-animal-suffering
Tomasik B. Lab Universes: Creating Infinite Suffering. 2006
https://reducing-suffering.org/lab-universes-creating-infinite-suffering
Torres P. Why We Should Think Twice About Colonizing Space 2018
Torres P. The possibility and risks of artificial general intelligence. Bulletin of the Atomic Scientists. 2019
https://doi.org/10.1080/00963402.2019.1604873
Vinding M. Suffering, Infinity, and Universe Anti-Natalism. 2017
https://magnusvinding.com/2017/12/01/suffering-infinity-and-universe-anti-natalism
Vinding M. Why Altruists Should Perhaps Not Prioritize Artificial Intelligence: A Lengthy Critique. 2018
https://magnusvinding.com/2018/09/18/why-altruists-should-perhaps-not-prioritize-artificial-intelligence-a-lengthy-critique
Vinding M. A Copernican Revolution in Ethics 2014
Vinding M. Anti-Natalism and the Future of Suffering: Why Negative Utilitarians Should not Aim for Extinction 2015
Vinding M. Suffering-Focused Ethics: Defense and Implications. Copenhagen: Ratio Ethica 2020
Vinding M. The Speciesism of Leaving Nature Alone, and the Theoretical Case for “Wildlife Anti-Natalism”. 2016
https://www.smashwords.com/books/view/624122
Ye A. How AI Could Become Sentient