Is ChatGPT a ‘virus that has been released into the wild’? • TechCrunch

by Connie Loizos
December 10, 2022
in Technology


More than three years ago, this editor sat down with Sam Altman for a small event in San Francisco soon after he’d left his role as the president of Y Combinator to become CEO of the AI company he co-founded in 2015 with Elon Musk and others, OpenAI.

At the time, Altman described OpenAI’s potential in language that sounded outlandish to some. Altman said, for example, that the opportunity with artificial general intelligence — machine intelligence that can solve problems as well as a human — is so great that if OpenAI managed to crack it, the outfit could “maybe capture the light cone of all future value in the universe.” He said that the company was “going to have to not release research” because it was so powerful. Asked if OpenAI was guilty of fear-mongering — Musk has repeatedly called for all organizations developing AI to be regulated — Altman talked about the dangers of not thinking about “societal consequences” when “you’re building something on an exponential curve.”

The audience laughed at various points of the conversation, not certain how seriously to take Altman. No one is laughing now, however. While machines are not yet as intelligent as people, the tech that OpenAI has since released is taking many aback (including Musk), with some critics fearful that it could be our undoing, especially with more sophisticated tech reportedly coming soon.

Indeed, though heavy users insist it’s not so smart, the ChatGPT model that OpenAI made available to the general public last week is so capable of answering questions like a person that professionals across a range of industries are trying to process the implications. Educators, for example, wonder how they’ll be able to distinguish original writing from the algorithmically generated essays they are bound to receive — and that can evade anti-plagiarism software.

Paul Kedrosky isn’t an educator per se. He’s an economist, venture capitalist and MIT fellow who calls himself a “frustrated normal with a penchant for thinking about risks and unintended consequences in complex systems.” But he is among those who are suddenly worried about our collective future, tweeting yesterday: “[S]hame on OpenAI for launching this pocket nuclear bomb without restrictions into an unprepared society.” Wrote Kedrosky, “I obviously feel ChatGPT (and its ilk) should be withdrawn immediately. And, if ever re-introduced, only with tight restrictions.”

We talked with him yesterday about some of his concerns, and why he thinks OpenAI is driving what he believes is the “most disruptive change the U.S. economy has seen in 100 years,” and not in a good way.

Our chat has been edited for length and clarity.

TC: ChatGPT came out last Wednesday. What triggered your reaction on Twitter?

PK: I’ve played with these conversational user interfaces and AI services in the past and this obviously is a huge leap beyond. And what troubled me here in particular is the casual brutality of it, with massive consequences for a host of different activities. It’s not just the obvious ones, like high school essay writing, but across pretty much any domain where there’s a grammar — [meaning] an organized way of expressing yourself. That could be software engineering, high school essays, legal documents. All of them are easily eaten by this voracious beast and spit back out again without compensation to whatever was used for training it.

I heard from a colleague at UCLA who told me they have no idea what to do with essays at the end of the current term, where they’re getting hundreds per course and thousands per department, because they have no idea anymore what’s fake and what’s not. So to do this so casually — as someone said to me earlier today — is reminiscent of the so-called [ethical] white hat hacker who finds a bug in a widely used product, then informs the developer before the broader public knows so the developer can patch their product and we don’t have mass devastation and power grids going down. This is the opposite, where a virus has been released into the wild with no concern for the consequences.

It does feel like it could eat up the world.

Some might say, ‘Well, did you feel the same way when automation arrived in auto plants and auto workers were put out of work? Because this is a kind of broader phenomenon.’ But this is very different. These specific learning technologies are self-catalyzing; they’re learning from the requests. So robots in a manufacturing plant, while disruptive and creating incredible economic consequences for the people working there, didn’t then turn around and start absorbing everything going on inside the factory, moving across sector by sector, whereas that’s exactly what we can not only expect but should expect.

Musk left OpenAI partly over disagreements about the company’s development, he said in 2019, and he has been talking about AI as an existential threat for a long time. But people carped that he didn’t know what he was talking about. Now we’re confronting this powerful tech and it’s not clear who steps in to address it.

I think it’s going to start out in a bunch of places at once, most of which will look really clumsy, and people will [then] sneer because that’s what technologists do. But too bad, because we’ve walked ourselves into this by creating something with such consequentiality. So in the same way that the FTC demanded that people running blogs years ago [make clear they] have affiliate links and make money from them, I think at a trivial level, people are going to be forced to make disclosures that ‘We wrote none of this. This is all machine generated.’

I also think we’re going to see new energy for the ongoing lawsuit against Microsoft and OpenAI over copyright infringement in the context of our in-training, machine learning algorithms. I think there’s going to be a broader DMCA issue here with respect to this service.

And I think there’s the potential for a [massive] lawsuit and settlement eventually with respect to the consequences of the services, which, you know, will probably take too long and not help enough people, but I don’t see how we don’t end up in [this place] with respect to these technologies.

What’s the thinking at MIT?

Andy McAfee and his group over there are more sanguine and have a more orthodox view that anytime we see disruption, other opportunities get created, people are mobile, they move from place to place and from occupation to occupation, and we shouldn’t be so hidebound that we think this particular evolution of technology is the one around which we can’t mutate and migrate. And I think that’s broadly true.

But the lesson of the last five years in particular has been these changes can take a long time. Free trade, for example, is one of those incredibly disruptive, economy-wide experiences, and we all told ourselves as economists looking at this that the economy will adapt, and people in general will benefit from lower prices. What no one anticipated was that someone would organize all the angry people and elect Donald Trump. So there’s this idea that we can anticipate and predict what the consequences will be, but [we can’t].

You talked about high school and college essay writing. One of our kids has already asked — theoretically! — if it would be plagiarism to use ChatGPT to author a paper.

The purpose of writing an essay is to prove that you can think, so this short circuits the process and defeats the purpose. Again, in terms of consequences and externalities, if we can’t let people have homework assignments because we no longer know whether they’re cheating or not, that means that everything has to happen in the classroom and must be supervised. There can’t be anything we take home. More stuff must be done orally, and what does that mean? It means school just became much more expensive, much more artisanal, much smaller and at the exact time that we’re trying to do the opposite. The consequences for higher education are devastating in terms of actually delivering a service anymore.

What do you think of the idea of universal basic income, or enabling everyone to participate in the gains from AI?

I’m much less strong a proponent than I was pre-COVID. The reason is that COVID, in a sense, was an experiment with a universal basic income. We paid people to stay home, and they came up with QAnon. So I’m really nervous about what happens whenever people don’t have to hop in a car, drive somewhere, do a job they hate and come home again, because the devil finds work for idle hands, and there’ll be a lot of idle hands and a lot of deviltry.




