
Researcher Meredith Whittaker says AI’s biggest risk isn’t ‘consciousness’—it’s the corporations that control them

BY WILFRED CHAN

AI pioneer Geoffrey Hinton, a 75-year-old computer scientist known as “the Godfather of AI,” has made waves this week after resigning from Google to warn that AI could soon surpass humans in intelligence and learn how to destroy humanity on its own.

But Hinton’s warnings, while dire, are missing the point, says Meredith Whittaker, a prominent AI researcher who was pushed out of Google in 2019 in part for organizing employees against the company’s deal with the Pentagon to build machine vision technology for military drones. Now the president of the Signal Foundation, Whittaker tells Fast Company why Hinton’s alarmism is a distraction from more pressing threats, and how workers can stand up against technology’s harms from within. (Hinton did not respond to Fast Company’s request for comment.)

This interview has been condensed and edited for clarity.

Fast Company: Let’s start with your reaction to Geoffrey Hinton’s big media tour around leaving Google to warn about AI. What do you make of it so far?

Meredith Whittaker: It’s disappointing to see this autumn-years redemption tour from someone who didn’t really show up when people like Timnit [Gebru] and Meg [Mitchell] and others were taking real risks at a much earlier stage of their careers to try and stop some of the most dangerous impulses of the corporations that control the technologies we’re calling artificial intelligence.

So, there’s a bit of have-your-cake-and-eat-it-too: You get the glow of your penitence, but I didn’t see any solidarity or any action when there were people really trying to organize and do something about the harms that are happening now.

FC: You started organizing within Google in 2017 to oppose Project Maven, a contract the company signed to build machine vision technology for U.S. military drones. Did you anticipate being forced out for speaking up?

MW: I didn’t plan it with that in mind. But after our letter opposing Project Maven blew up, I remember realizing at that moment, based on my understanding of history, that I would be pushed out eventually. You don’t mess with the money like that and not get pushed out.

FC: Did you know Geoffrey Hinton when you were at Google?

MW: Not well, but we were in the same conferences, in the same room sometimes.

FC: And did he express any kind of support when you were organizing?

MW: I didn’t see him show up for any of the rallies or the actions or bottom-line any work. And then when it was getting dicey, when Google started taking the gloves off and hiring a union-busting firm, I didn’t see him come out and support.

But the effectiveness of raising concerns hinges on the ability of people who raise them to be safe. So, if you don’t come out when me and Meg are being fired, if you don’t come out when others are being retaliated against, when research is being suppressed, then you are tacitly endorsing an environment that punishes people for raising concerns. [Editor’s note: While Whittaker says she was forced out of her job by Google, the company claims she opted to leave.]

FC: There’s also a pattern that you’ve been pointing out: A lot of the people being punished for speaking out have been women.

MW: Women, and particularly women who aren’t white. And that isn’t just at Google. Across the space where people are discussing artificial intelligence, the folks who have come out earliest and with the most grounded and materially specific concerns have generally been women, particularly Black women and women of color. I think it’s just notable who seems easy to ignore, and whose concerns are elevated as like: “Well, finally we’re hearing it from the father, so it must be true.”

FC: On CNN recently, Hinton downplayed the concerns of Timnit Gebru—whom Google fired in 2020 for refusing to withdraw a paper about AI’s harms to marginalized people—saying her ideas were not as “existentially serious” as his own. What do you make of that?

MW: I think it’s stunning that someone would say that the harms [from AI] that are happening now—which are felt most acutely by people who have been historically minoritized: Black people, women, disabled people, precarious workers, et cetera—that those harms aren’t existential.

What I hear in that is, “Those aren’t existential to me. I have millions of dollars, I am invested in many, many AI startups, and none of this affects my existence. But what could affect my existence is if a sci-fi fantasy came to life and AI were actually super intelligent, and suddenly men like me would not be the most powerful entities in the world, and that would affect my business.”

FC: So, we shouldn’t be worried that AI will come to life and wipe out humanity?

MW: I don’t think there’s any evidence that large machine learning models—that rely on huge amounts of surveillance data and the concentrated computational infrastructure that only a handful of corporations control—have the spark of consciousness.

We can still unplug the servers; the data centers can flood as the climate encroaches; we can run out of water to cool the data centers; the surveillance pipelines can melt as the climate becomes more erratic and less hospitable.

I think we need to dig into what is happening here, which is that, when faced with a system that presents itself as a listening, eager interlocutor that’s hearing us and responding to us, we seem to fall into a kind of trance in relation to these systems, and almost counterfactually engage in some kind of wish fulfillment: thinking that they’re human, and there’s someone there listening to us. It’s like when you’re a kid telling ghost stories, something with a lot of emotional weight, and suddenly everybody is terrified and reacting to it. And it becomes hard to disbelieve.

FC: What you said just now—the idea that we fall into a kind of trance—what I’m hearing you say is that’s distracting us from actual threats like climate change or harms to marginalized people.

MW: Yeah, I think it’s distracting us from what’s real on the ground and much harder to solve than war-game hypotheticals about a thing that is largely kind of made up. And particularly, it’s distracting us from the fact that these are technologies controlled by a handful of corporations who will ultimately make the decisions about what technologies are made, what they do, and who they serve. And if we follow these corporations’ interests, we have a pretty good sense of who will use it, how it will be used, and where we can resist to prevent the actual harms that are occurring today and likely to occur.

FC: Geoffrey Hinton also said on CNN, “I think it’s easier to voice your concerns if you leave the company first.”

MW: If personal ease is your metric, then you’re not wrong. But I don’t think it’s more effective. This is one of the reasons that I and so many others turn to labor organizing. Because there’s a lot of power in being able to withhold your labor collectively, in joining together as the people who ultimately make these companies function or not, and in saying, “We’re not going to do this.” Without people doing it, it doesn’t happen.

HippieChick58 Level 9 Sep 17, 2023

12 comments


2

Or the corporations that won’t control AI. The only real problem on the planet is human beings.

3

AI, in its current state, is at best only able to produce a sophisticated imitation of what humans can produce. For instance, AI can pass the bar exam, but it can't actually practice law, because it can't actually reason; it can only imitate. Even if the imitation is highly sophisticated, it is still no substitute for human thought.

I do expect, however, that some corporations that overestimate AI's actual abilities will rely on it too heavily, which will probably cause major losses, if not bankruptcies.

AI is a shiny new toy; there will be a rush to use it without much actual thought about whether it is ready to do certain jobs... which will most likely be disastrous.

Like the World Wide Web. Tim Berners-Lee is the father of the World Wide Web, and he admits that he created a monster.

The real concern is how far it could be developed. Among those expressing concern, there seems to be at least a valid worry that this could get out of hand. What the AI on any given server is "thinking" cannot be observed, measured, or quantified as it happens. Think 2001: A Space Odyssey. Should "Omega Corp" or a less scrupulous government possess significant AI power, who knows what could happen. Now is the time to try to get ahead of this, though who knows how!

3

The corporations are always going to try to maximize and privatize profits while shifting the liabilities (cleanup costs) onto the public. That is a given. And there is always a cleanup cost, or some kind of negative consequence, that the corporations both precipitate and avoid responsibility for. With AI, the consequence could be something that no corporation, no nation, no species can remediate. The danger here is an existential one. We are opening Pandora's box with AI. A village full of irate citizens armed with pitchforks could take down Frankenstein's monster; AI could make villagers armed with AR-15s take themselves down. We already have climate change threatening human civilization, and it will take cooperation and ingenuity to get past that. Kind of tough if everybody is shooting at each other, blowing up infrastructure, and generally burning down the house.

There is nothing at this point that’s going to stop the worst from happening; it is only a matter of when. But of course there will always be those who resort to their coping mechanisms, telling themselves, ‘But there’s always hope.’

There isn’t any. And mostly it’s the religious that have that mentality.

@CuddyCruiser No matter how bad things get, it could always be worse. But yeah, the potential for very shitty is really high.

4

Humans are always behind the abuse of any technology.

Ryo1 Level 8 Sep 18, 2023

This AI is going to end humanity one day for that very reason, plus many others.

3

Just as I said about Genetically Modified Organisms: It's not the technology that scares me, it's the fact that corporations will be in charge of it.

3

"AI - WOW! This is sooooooooo great, let's have fun!" - the nonlinear, no-critical-thinking folks.

"What could possibly go wrong?" - the rest of us.

3

I see too much sci-fi fandom in the comments below.

3

Or some nut will use an AI program to create a doomsday device in their garage. I'm pretty sure intelligent life such as ours is an evolutionary dead end.

4

What would be the objective of ending humanity?

To raise the Universe's collective IQ several points?

6

This looks like a good place to hopefully entertain folks.
Who's to say where AI might turn?
I fear we are about to learn.
Perhaps the number of humans it will reduce.
Is that something we want to produce?
For the good old days we will sorely yearn!

For the good of the planet brutal efficiency.
With a conscious deficiency.
Could be game over for man.
Many safeguards programmers must plan.
Beware should it attain self-sufficiency.

Years down the road, source code lost.
Could carry a hell of a cost.
Won't happen in my time.
It makes for an interesting theoretical rhyme.
By electrons you could be bossed!

Excellent! Well done and executed. You should post it in many places, at least where you can find people who can still think.

4

Definitely agree. Every powerful technology can be destructive, and technologies are always at their worst when deliberately enhanced by humans: fire, atomic energy, and explosives, for example. Militaries are already leaders in IT.

5

This AI technology will eventually be one of several things that bring down human civilization.

Younger people are not having children the way past generations did, for one. Preventable diseases are striking earlier and earlier in life now, many hitting teenagers and people in their 20s.

Electromagnetic chaos from our iPhones, tablets, personal computers, etc. is another contributing factor. Diets heavy in processed and ultra-processed foods are another, as is poor air quality. And look at the dropping fertility rates over the last 20 or 30 years.

Given all this, I can only conclude that one day humans will become extinct, as well as all life on the planet. Probably sometime in the next 50 to 100 years at most.
