A postful of questions

From the readings, what are the ethical, social, and moral issues regarding online censorship? Why would governments limit freedom of speech, and how do they go about enforcing these restrictions? Is it ethical or moral for technology companies to follow requests of the host country to suppress freedom of expression? That is, should technology companies comply with the censorship requirements of the country they are operating in? Is it ethical or moral for technology companies or developers to provide tools that illegally circumvent such restrictions? Is online censorship a major concern? What role should technology companies play in defending against or enforcing limitations of freedom of speech?

Censorship seems straightforward enough as an ethical issue—censorship is bad, period—but upon further thought it’s more complex and layered than it seems. Because everything uploaded to the internet is, for all practical purposes, permanently public, let’s consider the following questions as we unpack the concept of “censorship”:

  • Is it ethical for companies to remove dissenting opinions at the behest of authoritarian governments?
  • Is it ethical for companies to remove information broadcast by terrorist organizations, or about terrorist organizations?
  • Is it ethical for companies to remove discriminatory, provocative, hateful content generated by their users? (read: reddit)
  • Is it ethical for companies to remove leaked/stolen personal photos? (read: celebrity nudes on 4chan)
  • Is it ethical for companies to remove smears and slander against an individual? Is “the right to be forgotten” ethical?
  • Is it ethical for criminals to claim the right to be forgotten?

Putting these scenarios together, I find the line incredibly difficult to draw. Even if we only consider the situation with terrorism, since the list of terrorist organizations is decided by governments, companies are essentially performing censorship at the request of governments. In situations like Tibet—China considers some Tibetan Buddhist groups terrorist organizations (because of their use of self-immolation), while the U.S. recognizes them as legitimate religious groups—how do we choose?

I want to close with a more personal, intimate scenario. Here’s a question posed by a Notre Dame professor, one I’ve been thinking about for a while without reaching a conclusion. We’ve all posted things that we later regret—is “regret” a good reason to remove anything from the internet? When things are published on the Internet, who owns them?

amc-wiping-digital-history.png
Thanks Ann-Marie

A deepening gap

From the readings, how is automation impacting employment? What are the social, political, and economic implications of replacing human labor with automation on a massive scale? Were the Luddites right about technology and jobs? Should we halt development of automation technology, or at least temper it to ensure employment of human laborers, or does automation free humans for other endeavors? Is a Universal Basic Income a viable means of addressing the concerns over loss of employment due to mass automation, or is it unnecessary? How does society deal with a future of mass automation and lower employment (or is this not really a problem)? Is automation ultimately a good thing or a bad thing for humanity? What are the ethical implications for those who develop and utilize these automation technologies?

It’s been a historical trend that technological advancements eliminate some jobs while creating others, but whether these advancements cause long-term structural unemployment is still debated. There are arguments on both sides: some think that new technology will inevitably damage the economy by eliminating jobs; others have a more positive outlook and believe these advancements will ultimately create new markets and new demands, which in turn generate new jobs. Regardless of who is right, it seems that technological advancement eliminates low-skill jobs disproportionately: throughout the 20th century, machines and automation generally replaced low-skill work while benefiting high-skill workers. In the new era of AI, I believe this trend will continue. Although I’m happy with the macro-scale changes it will bring (higher-level jobs, higher demand for goods, a better economy), I’m worried about the micro-scale consequences, specifically the deepening socioeconomic gap within societies, as well as the challenge it poses for education.

Douglas Hofstadter postulated in his famous Gödel, Escher, Bach that there are two main ways for an organism to exhibit intelligence: the Mechanical mode, in which the organism mechanically applies a set of rules (calculating 1+1=2 by applying a rule), and the Intelligent mode, in which the organism “thinks outside the box” and analyzes the rules themselves (understanding that one apple and another apple together make two apples). He argues that the main difference between humans and machines is that humans exhibit Intelligent-mode intelligence, while machines are only Mechanically intelligent. (Artificial intelligence is an attempt to make machines emulate the Intelligent mode, but they still seem to be confined within a given system.)

If we accept this theory, then it seems that a boom in mechanical automation will replace the more mechanical, low-skill jobs. This is problematic: low-skill jobs are commonly held by people from lower socioeconomic backgrounds, who don’t have access to the education that enables high-skill, high-compensation jobs involving Intelligent-mode, out-of-the-box thinking. With automation eliminating these jobs, it will be difficult to help the displaced workers find new jobs that require a different skill set. A universal basic income could at least alleviate this issue: without having to spend long hours on low-skill jobs just to earn enough to survive, people may be able to acquire the new skills necessary for higher-skill, creative jobs.

But what concerns me more is the impact on education. I couldn’t find enough data to back up my theory, so the following is a hand-wavy thought experiment. With any technical or scientific advancement, we need to acquire new skills to appreciate and utilize its fruits, which stretches the duration of education required for high-skill jobs. Today a college degree is required for most high-skill jobs, which translates to about 16 years of education in total (and we’re 21–22 years old when we enter the workforce). In 1900, average life expectancy worldwide was no more than 40 years, so people naturally didn’t need as many as 16 years of education to qualify for the high-skill jobs of that era. The collective human accumulation of technical knowledge has demanded ever more education to qualify as “high-skill,” so as we keep accumulating knowledge, we’ll require even longer education for anyone working at the cutting edge. Many people cannot afford a college education even today, so how do we deal with a future where we need, say, 10 years of college before producing any meaningful work? Especially for people at a disadvantage today, the gap is only going to deepen.