As we watch major tech platforms evolve over time, it’s clear that companies like Facebook, Apple, Google and Amazon (among others) have created businesses that are having a huge impact on humanity — sometimes positive and other times not so much.
That means these platforms have to understand how people are using them, and to recognize when bad actors are trying to manipulate them or use them for nefarious purposes (or when the companies themselves are). We can apply that same responsibility filter to individual technologies like artificial intelligence, and indeed to any advanced technology and the impact it could have on society over time.
This was a running theme this week at the South by Southwest conference in Austin, Texas.
The AI debate rages on
While the platform plays are clearly on the front lines of this discussion, tech icon Elon Musk repeated his concerns about AI running amok in a Q&A at South by Southwest. He worries that it won’t be long before we graduate from the narrow (and not terribly smart) AI we have today to a more generalized AI. He is particularly concerned that a strong AI could develop and evolve over time to the point it eventually matches the intellectual capabilities of humans. Of course, as TechCrunch’s Jon Shieber wrote, Musk sees his stable of companies as a kind of hedge against such a possible apocalypse.
“Narrow AI is not a species-level risk. It will result in dislocation… lost jobs… better weaponry and that sort of thing. It is not a fundamental, species-level risk, but digital super-intelligence is,” he told the South by Southwest audience.
He went so far as to suggest it could be more of a threat than nuclear warheads in terms of the kind of impact it could have on humanity.
Whether you agree with that assessment or not, or even if you think he is being somewhat self-serving with his warnings to promote his companies, he could be touching on something important about corporate responsibility around technology that startups and established companies alike should heed.
It was certainly on the mind of Apple’s Eddy Cue, who was interviewed on stage at SXSW by CNN’s Dylan Byers this week. “Tech is a great thing and makes humans more capable, but in of itself is not for good. People who make it, have to make it for good,” Cue said.
We can be sure that Twitter’s creators never imagined a world where bots would be launched to influence an election when they founded the company more than a decade ago. Over time, though, it has become crystal clear that Twitter, and indeed all large platforms, can be used for a variety of purposes, and the platforms have to react when they believe certain parties are using their networks to manipulate parts of the populace.
Cue dodged Byers’ questions about competing platforms, saying he could only speak to what Apple was doing because he didn’t have an inside view of companies like Facebook and Google (neither of which he actually mentioned by name). “I think our company is different than what you’re talking about. Our customers’ privacy is of utmost importance to us,” he said. That includes, he said, limiting the amount of data Apple collects, because the company isn’t worried about having enough to serve more meaningful ads. “We don’t care where you shop or what you buy,” he added.
Andy O’Connell from Facebook’s Global Policy Development team, speaking on a panel about the challenges of using AI to filter “fake news,” said that Facebook recognizes it can and should play a role if it sees people manipulating the platform. “This is a whole society issue, but there are technical things we are doing and things we can invest in [to help lessen the impact of fake news],” he said. He added that Facebook co-founder and CEO Mark Zuckerberg has framed it as a challenge to the company to make the platform more secure, which includes reducing the amount of false or misleading news that makes it onto the platform.
Recognizing tech’s limitations
As O’Connell put it, this is not just a Facebook problem or a general technology problem. It’s a social problem, and society as a whole needs to address it. Sometimes tech can help, but we can’t always look to tech to solve every problem. The trouble is that we can never really anticipate how a given piece of technology will behave, or how people will use it, once we put it out there.
All of this suggests that none of these problems, some of which we could never have even imagined, are easy to solve. For every action and reaction, there can be another set of unintended consequences, even with the best of intentions.
But it’s up to the companies developing the tech to recognize the responsibility that comes with great economic success, or simply with the impact whatever they are creating could have on society. “Everyone has a responsibility [to draw clear lines]. It is something we do and how we want to run our company. In today’s world people have to take responsibility and we intend to do that,” Cue said.
It has to be more than lip service, though. It requires thought, care and a willingness to react when things do run amok, along with a continual assessment of the impact of every decision.