There's a variant of Goodhart's law that applies to human value systems. These people presumably started out opposing animal cruelty, and they adopted "lower meat usage" as a decent proxy measure. But over time they internalized the proxy, and their moral values changed from "animal cruelty is inherently bad" to "eating meat is inherently bad". Now the original goal is gone, and they happily advocate for policies that would increase harm to real animals just to reduce the consumption of plants that look too much like animals.

This failure mode is all too common.

* Environmentalists wanted to help the environment, so they adopted the simpler policy of "oppose fossil fuel companies". This goal worked fine at first, but eventually it led them to oppose policies like carbon taxes and carbon sequestration that would help the environment but don't sufficiently hurt the fossil fuel companies.
* Libertarians wanted to protect our freedom, and adopted "defend everyone's access to guns" as a method of accomplishing this. This was a great idea in a military environment where guns were an effective power equalizer: some of the best weapons available, yet still cheap enough that even commoners could afford them. But with the rise of digital surveillance, propaganda, and vastly better physical weapons systems, guns have become irrelevant as a check on the government's power, and plausibly lower our overall freedom due to their impacts on crime and policing. Yet libertarians continue staunchly defending them above everything else, because to them libertarianism isn't about freedom anymore; it's about the guns.
* People wanted to prevent child abuse and human trafficking, so they adopted the simpler goal of banning child pornography in order to lower the financial incentive to abuse children. Then AI-generated pornography came along, with the potential to dramatically reduce this suffering by removing that incentive completely. But people oppose it, because "prevent children from being raped" stopped being their goal, and "prevent pixels from being put in arrangements that look like children being raped" became more important to them.

I don't know how to solve this problem. Human value systems are deeply flexible; this lets us learn and grow, but it also means we're prone to being corrupted. It's all too easy to start out on a well-intentioned quest and be changed by the experience into something we would never have endorsed at the beginning.
Everyone seems to think one of my examples is horribly wrong, but they can't seem to agree on which one. :) It's certainly possible that some are wrong; the world is complicated, and I don't claim to have a full understanding of any of these four things. But before dismissing the validity of one of these examples because it feels obviously wrong to you, consider whether this effect may have happened to you. Is the lab-grown meat/carbon taxes/guns/child pornography that you oppose *really* an inherently bad thing? Or do you only feel that way due to its close association with something else that's also bad?
@IsaacKing314 "These people presumably started out opposing animal cruelty" What if this is incorrect?
@IsaacKing314 great read, except that part about child porn. the end goal isn't just to prevent child abuse, it's also to absolutely kill the idea of it. allowing the sickness to proliferate just because "it's AI" is almost as bad. we can and must reject both.
@IsaacKing314 Had me till the child porn part. Not sure where you're headed there. Allowing AI generated material like that would create a cottage industry that would warp the minds of those who consumed it.
@IsaacKing314 No, I don't think that's what happened. It was always about virtue signalling, and you can't virtue signal with artificial meat.
@IsaacKing314 Do sometimes wonder how many were just like me and hated the sensory experience of eating meat from a very young age. Veganism gave me a framework by which to justify it to my social circle more acceptably than "I don't like any of it."