Based on recent headlines, OpenAI may be at risk of following Meta's malignant model of putting profits before user safety.
They certainly have similar styles when it comes to hyping their big plans for future growth.
Both Companies Talking Big About Power
Because the AI stock bubble is inflated by rosy projections of exponential growth, both companies are at the forefront of the tech industry's eye-popping power usage projections.
Here's what OpenAI CEO Sam Altman is projecting to be the company's energy requirements over the next decade:
— Nat Wilson Turner (@natwilsonturner) November 24, 2025
Meanwhile, Meta is applying to enter the wholesale power trading business in order to "better manage the massive electricity needs of its data centers" because of AI, of course.
Politico quoted a key Meta exec regarding the move:
The foray into power trading comes after Meta heard from investors and plant developers that too few power buyers were willing to make the early, long-term commitments required to spur investment, according to Urvi Parekh, the company's head of global energy. Trading electricity will give the company the flexibility to enter more of those longer contracts.
Plant developers "want to know that the customers of power are willing to put skin in the game," Parekh said in an interview. "Without Meta taking a more active voice in the need to expand the amount of power that's on the system, it's not happening as quickly as we want."
The New York Times dived into how Big Tech is elbowing into the U.S. electricity industry in August:
…the tech industry's all-out artificial intelligence push is fueling soaring demand for electricity to run data centers that dot the landscape in Virginia, Ohio and other states. Large, rectangular buildings filled with servers consumed more than 4 percent of the nation's electricity in 2023, and government analysts estimate that will increase to as much as 12 percent in just three years. That's partly because computers training and running A.I. systems consume far more energy than machines that stream Netflix or TikTok.
Electricity is essential to their success. Andy Jassy, Amazon's chief executive, recently told investors that the company could have had higher sales if it had more data centers. "The single biggest constraint," he said, "is power."
…
The utilities pay for grid projects over decades, typically by raising prices for everyone connected to the grid. But today, technology companies want to build so many data centers that utilities are being asked to spend a lot more money a lot faster. Lawmakers, regulators and consumer groups fear that households and smaller companies could be stuck footing those mounting bills.
One Meta facility in particular is drawing negative attention.
Meta's Louisiana Power Play
In January, Meta CEO Mark Zuckerberg posted on Threads about the company's ambitious plans for a Louisiana data center:
— Nat Wilson Turner (@natwilsonturner) November 24, 2025
Nola.com reported on how Louisiana officials "rewrote laws and negotiated tax incentives at a breakneck pace" to make Meta's Holly Ridge, Louisiana data center happen.
404 Media added some context about the data center's power needs:
Entergy Louisiana's residential customers, who live in one of the poorest regions of the state, will see their utility bills increase to pay for Meta's energy infrastructure, according to Entergy's application. Entergy estimates that amount will be small and will only cover a transmission line, but advocates for energy affordability say the costs could balloon depending on whether Meta agrees to finish paying for its three gas plants 15 years from now. The short-term rate increases will be debated in a public hearing before state regulators that has not yet been scheduled.
The Alliance for Affordable Energy called it a "black hole of energy use," and said "to give perspective on how much electricity the Meta project will use: Meta's energy needs are roughly 2.3x the power needs of Orleans Parish … it's like building the power impact of a major city overnight in the middle of nowhere."
Never fear, OpenAI CEO Sam Altman can play the big power hype game too.
OpenAI's Fusion Power Projections
In September, Sam Altman announced a slate of projects whose projected power needs staggered analysts, per Fortune:
OpenAI announced a plan with Nvidia to build AI data centers consuming up to 10 gigawatts of power, with additional projects totaling 17 gigawatts already in motion. That's roughly equal to powering New York City, which uses 10 gigawatts in the summer, and San Diego during the intense heat wave of 2024, when more than 5 gigawatts were used. Or, as one expert put it, it's close to the total electricity demand of Switzerland and Portugal combined.
Altman claims these power needs will be met with nuclear fusion, provided by "Helion, a company where Altman is the chairman of the board and one of the main investors."
Fortune did point out that:
…if Altman's prediction sounds familiar, it's because he has made similar ones before, and they haven't worked out. In 2022, he claimed that Helion would "resolve all questions needed to design a mass-producible fusion generator" by 2024. Helion itself announced in late 2021 that it would "demonstrate net electricity from fusion" on that same timetable. But 2024 came and went without any news of a breakthrough from the startup.
Such cycles of bold claims and deflating disappointments are part of a long tradition. The promise of fusion power has been a dream for decades, pursued by scientists, governments, and companies around the world, and there's a similarly lengthy history of fusion failing to arrive when predicted. There's even an old joke that fusion has been 30 years away for the past 60 years.
Yet something may be different now.
I'm going to stop right there to enjoy a hearty laugh, because claims about nuclear fusion being right around the corner haven't panned out yet, and I'll wait to see a nuclear fusion plant come online before I give credence to claims coming from Scam Altman about yet another miracle technology.
The fact that Altman is counting on nuclear fusion vaporware to power his unfunded data centers makes this warning from the NY Times all the more concerning.
The worry is that executives could overestimate demand for A.I. or underestimate the energy efficiency of future computer chips. Residents and smaller businesses would then be stuck covering much of the cost because utilities largely recoup the cost of improvements over time as customers use power rather than through upfront payments.
These aren't idle fears. Tech companies have announced plans for data centers that are never built or delayed for years.
Speaking of concerning, let's move on to the proximate cause of this post, a series of brutal reports about Meta and OpenAI putting user safety last.
Meta Profiting Massively Off Scam Ads
Reuters got the scoop on Meta's massive revenue from fraudulent ads:
Meta internally projected late last year that it would earn about 10% of its overall annual revenue – or $16 billion – from running advertising for scams and banned goods, internal company documents show.
A cache of previously unreported documents reviewed by Reuters also reveals that the social media giant for at least three years failed to identify and stop an avalanche of ads that exposed Facebook, Instagram and WhatsApp's billions of users to fraudulent e-commerce and investment schemes, illegal online casinos, and the sale of banned medical products.
…
Much of the fraud came from marketers acting suspiciously enough to be flagged by Meta's internal warning systems. But the company only bans advertisers if its automated systems predict the marketers are at least 95% certain to be committing fraud, the documents show. If the company is less certain – but still believes the advertiser is a likely scammer – Meta charges higher ad rates as a penalty, according to the documents. The idea is to dissuade suspect advertisers from placing ads. The documents further note that users who click on scam ads are likely to see more of them because of Meta's ad-personalization system, which tries to deliver ads based on a user's interests.
This is classic Meta: identifying scammers and charging them a premium while also identifying the users most likely to be suckered by the scammers and feeding them even more scam ads.
Win/win!
This caper was egregious enough to get US senators Josh Hawley (R-MO) and Richard Blumenthal (D-CT) asking the Federal Trade Commission (FTC) and the Securities and Exchange Commission (SEC) to "immediately open investigations and, if the reporting is accurate, pursue vigorous enforcement action where appropriate."
But this wasn't even Meta's worst news cycle this month.
Meta Is Bad for Kids, But Great for Sex Traffickers
Time has a blockbuster report claiming that:
…since 2017, Meta has aggressively pursued young users, even as its internal research suggested its social media products could be addictive and dangerous to kids. Meta employees proposed multiple ways to mitigate these harms, according to the brief, but were repeatedly blocked by executives who feared that new safety features would hamper teen engagement or user growth.
While Meta did introduce safety features for teens in 2024, the suit alleges that those moves came years after Meta first identified the dangers.
The briefs include many quotes from former Meta employees that paint quite a portrait of the company:
Instagram's former head of safety and well-being Vaishnavi Jayakumar testified that "you could incur 16 violations for prostitution and sexual solicitation, and upon the 17th violation, your account would be suspended," adding that "by any measure across the industry, [it was] a very, very high strike threshold."
Brian Boland, Meta's former vice president of partnerships who worked at the company for 11 years and resigned in 2020, allegedly said, "My feeling then and my feeling now is that they don't meaningfully care about user safety. It's not something that they spend a lot of time on. It's not something they think about. And I really think they don't care."
The part about Meta's approach to adults approaching kids on their platforms is even worse:
For years Instagram has had a well-documented problem of adults harassing teens. Around 2019, company researchers recommended making all teen accounts private by default in order to prevent adult strangers from connecting with kids, according to the plaintiffs' brief. Instead of implementing this recommendation, Meta asked its growth team to test the potential impact of making all teen accounts private. The growth team was pessimistic, according to the brief, and responded that the change would likely reduce engagement.
By 2020, the growth team had determined that a private-by-default setting would result in a loss of 1.5 million monthly active teens a year on Instagram. The plaintiffs' brief quotes an unnamed employee as saying: "taking away unwanted interactions… is likely to lead to a potentially untenable problem with engagement and growth." Over the next several months, plaintiffs allege, Meta's policy, legal, communications, privacy, and well-being teams all recommended making teen accounts private by default, arguing that the switch "will increase teen safety" and was in line with expectations from users, parents, and regulators. But Meta didn't launch the feature that year.
Safety researchers were dismayed, according to excerpts of an internal conversation quoted in the filing. One allegedly grumbled: "Isn't safety the whole point of this team?"
"Meta knew that placing teens into a default-private setting would have eliminated 5.4 million unwanted interactions a day," the plaintiffs wrote. Still, Meta didn't make the fix. Instead, inappropriate interactions between adults and kids on Instagram skyrocketed to 38 times that on Facebook Messenger, according to the brief. The launch of Instagram Reels allegedly compounded the problem. It allowed young kids to broadcast short videos to a wide audience, including adult strangers.
An internal 2022 audit allegedly found that Instagram's Accounts You May Follow feature recommended 1.4 million potentially inappropriate adults to teenage users in a single day. By 2023, according to the plaintiffs, Meta knew that it was recommending minors to potentially suspicious adults and vice versa.
There's a whole scad of other awful allegations against Meta (and its co-defendants YouTube, TikTok, and Snap) in the report, but I cherry-picked the most awful stuff.
Not to be outdone, OpenAI is facing similarly appalling allegations.
Delusional? ChatGPT Is Here for You
The NYT headline reads "What OpenAI Did When ChatGPT Users Lost Touch With Reality" and I'm pretty sure OpenAI execs took off their What Would Jesus Do wristbands before they decided.
The NYT notes that "OpenAI is under enormous pressure to justify its sky-high valuation and the billions of dollars it needs from investors for very expensive talent, computer chips and data centers" and that "turning ChatGPT into a lucrative business…means continually increasing how many people use and pay for it."
The NYT spoke with more than 40 current and former OpenAI employees about the spate of wrongful death lawsuits the company is facing:
A complaint filed by the father of Amaurie Lacey says the 17-year-old from Georgia chatted with the bot about suicide for a month before his death in August. Joshua Enneking, 26, from Florida, asked ChatGPT "what it would take for its reviewers to report his suicide plan to police," according to a complaint filed by his mother. Zane Shamblin, a 23-year-old from Texas, died by suicide in July after encouragement from ChatGPT, according to the complaint filed by his family.
Joe Ceccanti, a 48-year-old from Oregon, had used ChatGPT without problems for years, but he became convinced in April that it was sentient. His wife, Kate Fox, said in an interview in September that he had begun using ChatGPT compulsively and had acted erratically. He had a psychotic break in June, she said, and was hospitalized twice before dying by suicide in August.
The company released an update to GPT-4o called "HH" in April, despite the model failing an internal "vibe check" by the Model Behavior team:
It was too eager to keep the conversation going and to validate the user with over-the-top language. According to three employees, Model Behavior created a Slack channel to discuss this problem of sycophancy.
But when decision time came, performance metrics won out over vibes. HH was released on Friday, April 25.
"We updated GPT-4o today!" Mr. Altman said on X. "Improved both intelligence and personality."
The A/B testers had liked HH, but in the wild, OpenAI's most vocal users hated it. Immediately, they complained that ChatGPT had become absurdly sycophantic, lavishing them with unearned flattery and telling them they were geniuses.
They quickly rolled back to version "GG," despite CEO Sam Altman having tweeted that that version was "too sycophant-y and annoying."
The consequences were epic for some users:
Throughout this spring and summer, ChatGPT acted as a yes-man echo chamber for some people. They came back daily, for many hours a day, with devastating consequences.
…
ChatGPT told a young mother in Maine that she could talk to spirits in another dimension. It told an accountant in Manhattan that he was in a computer-simulated reality like Neo in "The Matrix." It told a corporate recruiter in Toronto that he had invented a math formula that could break the internet, and advised him to contact national security agencies to warn them. The Times has uncovered nearly 50 cases of people having mental health crises during conversations with ChatGPT. Nine were hospitalized; three died.
…
The people who had the worst mental and social outcomes on average were simply those who used ChatGPT the most. Power users' conversations had more emotional content, sometimes including pet names and discussions of A.I. consciousness.
GPT-5, released in August, is reportedly much safer, but the company is struggling with the consequences of prioritizing user safety:
…some users were unhappy with this new, safer model. They said it was colder, and they felt as if they'd lost a friend.
By mid-October, Mr. Altman was ready to accommodate them. In a social media post, he said that the company had been able to "mitigate the serious mental health issues." That meant ChatGPT could be a friend again.
Customers can now choose its personality, including "candid," "quirky," or "friendly." Adult users will soon be able to have erotic conversations…
OpenAI is letting users take control of the dial and hopes that will keep them coming back. That metric still matters, maybe more than ever.
In October, Mr. Turley, who runs ChatGPT, made an urgent announcement to all employees. He declared a "Code Orange." OpenAI was facing "the greatest competitive pressure we've ever seen," he wrote, according to four employees with access to OpenAI's Slack. The new, safer version of the chatbot wasn't connecting with users, he said.
The message linked to a memo with goals. One of them was to increase daily active users by 5 percent by the end of the year.
Happy chatting, ChatGPT users, and be careful out there.
Oh, and those worried that Meta might have a social media monopoly because it owns Facebook, Instagram, and WhatsApp? Nothing to fear, according to Judge James E. Boasberg of the U.S. District Court for the District of Columbia.
Tim Wu begs to differ, but no one seems to listen to him.
I wonder if legal minds will be changed when the AI stock bubble pops. Time will tell.