Scarlett Johansson wants answers about ChatGPT voice that sounds like 'Her'

in quicklink

Johansson said that nine months ago Altman approached her proposing that she allow her voice to be licensed for the new ChatGPT voice assistant. He thought it would be "comforting to people" who are uneasy with AI technology.

"After much consideration and for personal reasons, I declined the offer," Johansson wrote.

Just two days before the new ChatGPT was unveiled, Altman again reached out to Johansson's team, urging the actress to reconsider, she said.

But before she and Altman could connect, the company publicly announced its new, splashy product, complete with a voice that she says appears to have copied her likeness.

To Johansson, it was a personal affront.

"I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference," she said.

ChatGPT provides false information about people, and OpenAI can’t correct it

in quicklink

In the EU, the GDPR requires that information about individuals is accurate and that they have full access to the information stored, as well as information about the source. Surprisingly, however, OpenAI openly admits that it is unable to correct incorrect information on ChatGPT. Furthermore, the company cannot say where the data comes from or what data ChatGPT stores about individual people. The company is well aware of this problem, but doesn’t seem to care. Instead, OpenAI simply argues that “factual accuracy in large language models remains an area of active research”. Therefore, noyb today filed a complaint against OpenAI with the Austrian DPA.

The Man Who Killed Google Search

in quicklink

This is the story of how Google Search died, and the people responsible for killing it.

Andrew: 'I have a preprint out estimating how many scholar…' - Mastodon

in quicklink

I have a preprint out estimating how many scholarly papers are written using chatGPT etc? I estimate upwards of 60k articles (>1% of global output) published in 2023. https://arxiv.org/abs/2403.16887

How can we identify this? Simple: there are certain words that LLMs love, and they suddenly start showing up a lot last year. Twice as many papers call something "intricate", big rises for "commendable" and "meticulous".
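The frequency signal the preprint describes can be illustrated with a minimal sketch (the function name and the toy corpora below are hypothetical, not from the paper): count how often the marker words appear per 1,000 tokens in two sets of abstracts and compare.

```python
import re
from collections import Counter

# Words the preprint flags as suddenly over-represented in 2023 papers.
MARKER_WORDS = ("intricate", "commendable", "meticulous")

def marker_rates(texts):
    """Occurrences of each marker word per 1,000 tokens across `texts`."""
    tokens = [w for text in texts for w in re.findall(r"[a-z]+", text.lower())]
    counts = Counter(tokens)
    total = len(tokens) or 1  # avoid division by zero on an empty corpus
    return {w: 1000 * counts[w] / total for w in MARKER_WORDS}

# Toy stand-ins for pre- and post-ChatGPT abstracts (made-up data).
before = ["We study a simple model of network growth and test it on real data."]
after = ["We present an intricate and meticulous analysis of network growth."]

for label, corpus in (("2022", before), ("2023", after)):
    print(label, marker_rates(corpus))
```

A real analysis would of course run this over full-text corpora per publication year and look at the ratio between years, as the preprint does.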

Atlas Complex

in books OlivieBlake

If I hadn't already read two volumes, and weren't so bad at leaving books unfinished, I would have quit long ago. Messy, dull characters, writing tricks that don't work.

Patternmaster

in books OctaviaE.Butler

Written first, but chronologically the last book in the series. It sketches an interesting world, yet never goes very deep into anything. The first two volumes fill in the backstory nicely. The 'previous' book (Clay's Ark) does link the volumes together, but diverges too much from the series for my taste. At the same time I miss a link to this book, which ends rather abruptly and has the least character development.

Domburg

in fotos

(gallery)

AI news that's fit to print

in quicklink

Zach Seward, "the editorial director of AI initiatives at The New York Times," on bad and good examples of AI use at news platforms.

People look at tools like ChatGPT and think their greatest trick is writing for you. But, in fact, the most powerful use case for LLMs is the opposite: creating structure out of unstructured prose.

Clay's Ark

in books OctaviaE.Butler

Quite a leap (in time, style, and plot) from the other volumes in this series. Knowing that the chronologically next book was written first, some things presumably have to come together here.

Bezzle

in books CoryDoctorow

Surviving Sky

in books KritikaRao

Yumi and the Nightmare Painter

in books BrandonSanderson

Whether it's projection or not, the story read like an indictment of AI-generated 'art'.

Circe

in books MadelineMiller

Jinx

in books MattGemmell

Winter in de Hatertse Vennen

in fotos

(gallery)

Do Users Write More Insecure Code with AI Assistants?

in quicklink

Overall, we find that participants who had access to an AI assistant wrote significantly less secure code than those without access to an assistant. Participants with access to an AI assistant were also more likely to believe they wrote secure code, suggesting that such tools may lead users to be overconfident about security flaws in their code.

Het lied van ooievaar en dromedaris

in books AnjetDaanje

A series of eleven stories, following eleven lives between roughly 1800 and now. All of them (some more than others) have something to do with a writer, a fictional version of Emily Brontë. The first time in years that I've read Dutch literature. Much better than the attempt I made a year ago. It certainly makes you think, but am I as lyrical about it as all the reviews?

Mind of my mind

in books OctaviaE.Butler

The (chronologically) second volume of this series. Curious where it's headed.

Philosopher Queens

in books RebeccaBuxton

An overview of 20 undervalued / lesser-known philosophers. Short chapters that only touch on why each was chosen. Nice as an introduction, but no more than that.

Losing the imitation game

in quicklink

it's worse than AI being merely inadequate for software development. Developing that mental model requires learning about the system. We do that by exploring it. We have to interact with it. We manipulate and change the system, then observe how it responds. We do that by performing the easy, simple programming tasks. Delegating that learning work to machines is the tech equivalent of eating our seed corn.

Via https://hidde.blog/links/losing-the-imitation-game/