by Justin Sherman
In philosophy and psychology, the naturalistic fallacy refers to an often incorrect inference: that because something is natural, it is therefore good or morally acceptable. Marketers use this to their advantage (think organic food products), and it influences our personal lives as well. A similar phenomenon occurs with digital technology. Just as we tend to romanticize Silicon Valley as a progressive utopia focused on social good, we also tend to classify any and all digital innovation (and "innovation") as inherently positive.

When the Internet went global, it was perceived -- by many Western democracies -- as an inherently democratic system that promoted free, open discourse regardless of end user. When smartphones went mainstream, they were heralded as uplifting people across societies, from the rural farmer in a developing country to the businessperson in an urban metropolis. Even modern advances in machine learning, blockchain, and quantum computing are often portrayed as purely positive game-changers.

As with any element of society, the best policymakers are -- and will be -- the ones who realize that not all cyber innovation is inherently good, and not all digital technology will revolutionize humanity for the better. The best approach is practical, not idealistic. The Internet has enabled the spread of hate speech, malware, disinformation, and child pornography alongside a free press. Smartphones have disrupted sleep patterns and possibly fostered addiction in teenagers just as much as they have enhanced global communication. Machine learning carries enormous bias that can, for instance, disproportionately sentence Black and Hispanic men to longer prison terms; blockchain systems have, in some cases, had extremely adverse impacts on the global climate; and quantum computing threatens to break the public-key encryption that holds the Internet together.
And this barely scratches the surface of the ethical issues that come with tech innovation and its attendant policies. We should not be technophobic -- not by any means -- but neither do we need policymakers shocked that Facebook disrespected user privacy. Thus, we can no longer afford to teach the leaders of tomorrow -- in elementary school, middle school, high school, college, and beyond -- only about the beneficial sides of technology. We must teach security and ethics; we must incorporate discussions of mental and bodily health; we must evaluate digital innovation's impacts on climate change, political stability, and social justice. To prepare tomorrow's leaders for the cyber challenges we face, education must accept and address that not all cyber "innovation" is inherently good. We need pragmatic policies toward innovation.