If WordPress is to survive, Matt Mullenweg must be removed - Josh Collinsworth blog

in quicklink

A good summary (though it may be a bit long for that) of the strange goings-on in WordPress land lately:

You don’t hurt users because you’re beefing with their host. You don’t put innocent bystanders in harm’s way.

It no longer matters what this was all about at that point, or whether you were originally right or not. You are irreversibly the bad guy now.

It’s also worth calling out a side effect of this move, which may or may not have been deliberate:

Matt’s actions have ensured his hosting companies are now the only WordPress hosts that can guarantee something like this will never happen to their users.

How to Give Away a Fortune

in quicklink

At a little after 2 p.m., Engelhorn walked into the room, wearing a navy vest, a white collarless shirt, and round metal glasses. She is thirty-two, with a voluminous wave of short brown hair. Her left arm is covered in a sleeve of tattoos. “Redistribution means recognizing that wealth comes from society and should return to society,” she told the council members. She spoke of wealth as power—a power she didn’t earn and doesn’t want.

Creativity is made, not generated — Procreate®

in quicklink

Generative AI is ripping the humanity out of things. Built on a foundation of theft, the technology is steering us toward a barren future. We think machine learning is a compelling technology with a lot of merit, but the path generative AI is on is wrong for us.

We're here for the humans. We're not chasing a technology that is a moral threat to our greatest jewel: human creativity.

Procreate makes a clear statement.

Photographer Disqualified From AI Image Contest After Winning With Real Photo | PetaPixel

in quicklink

A photographer has been disqualified from a picture competition after his real photograph won in the AI image category.

Scarlett Johansson wants answers about ChatGPT voice that sounds like 'Her'

in quicklink

Johansson said that nine months ago Altman approached her proposing that she allow her voice to be licensed for the new ChatGPT voice assistant. He thought it would be "comforting to people" who are uneasy with AI technology.

"After much consideration and for personal reasons, I declined the offer," Johansson wrote.

Just two days before the new ChatGPT was unveiled, Altman again reached out to Johansson's team, urging the actress to reconsider, she said.

But before she and Altman could connect, the company publicly announced its new, splashy product, complete with a voice that she says appears to have copied her likeness.

To Johansson, it was a personal affront.

"I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference," she said.

ChatGPT provides false information about people, and OpenAI can’t correct it

in quicklink

In the EU, the GDPR requires that information about individuals is accurate and that they have full access to the information stored, as well as information about the source. Surprisingly, however, OpenAI openly admits that it is unable to correct incorrect information on ChatGPT. Furthermore, the company cannot say where the data comes from or what data ChatGPT stores about individual people. The company is well aware of this problem, but doesn’t seem to care. Instead, OpenAI simply argues that “factual accuracy in large language models remains an area of active research”. Therefore, noyb today filed a complaint against OpenAI with the Austrian DPA.

The Man Who Killed Google Search

in quicklink

This is the story of how Google Search died, and the people responsible for killing it.

Andrew: 'I have a preprint out estimating how many scholar…' - Mastodon

in quicklink

I have a preprint out estimating how many scholarly papers are written using chatGPT etc? I estimate upwards of 60k articles (>1% of global output) published in 2023. https://arxiv.org/abs/2403.16887

How can we identify this? Simple: there are certain words that LLMs love, and they suddenly start showing up a lot last year. Twice as many papers call something "intricate", big rises for "commendable" and "meticulous".
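The frequency check described above is easy to try yourself. A minimal sketch in Python, where the marker-word list is illustrative (just the three words the toot mentions; the preprint works with a larger, data-driven set) and the two sample abstracts are made up:

```python
import re
from collections import Counter

# Words the toot singles out as suddenly more frequent in 2023 papers.
# Illustrative only; the preprint derives its word list from the data.
MARKER_WORDS = {"intricate", "commendable", "meticulous"}

def marker_rate(text: str) -> float:
    """Return the fraction of words in `text` that are marker words."""
    words = re.findall(r"[a-z]+", text.lower())
    if not words:
        return 0.0
    counts = Counter(words)
    hits = sum(counts[w] for w in MARKER_WORDS)
    return hits / len(words)

# Hypothetical abstracts for illustration.
abstract_a = "We present a simple method for counting words in papers."
abstract_b = "We present a meticulous and commendable analysis of intricate patterns."

print(marker_rate(abstract_a))  # 0.0
print(marker_rate(abstract_b))  # 3 of 10 words -> 0.3
```

Comparing this rate across publication years is the core of the estimate: a sudden jump in marker-word frequency suggests LLM involvement in the writing.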

Atlas Complex

in books OlivieBlake

If I hadn't already read two volumes, and weren't so bad at leaving books unfinished, I would have stopped long ago. Messy, dull characters, writing tricks that don't work.

Patternmaster

in books OctaviaE.Butler

The first book written, but chronologically the last in the series. It sketches an interesting world but never goes very deep into anything. The first two volumes fill in the backstory nicely. The 'previous' book (Clay's Ark) does link the installments, but deviates too much from the rest of the series for my taste. At the same time, I miss a link to this book, which ends rather abruptly and has the least character development.

Domburg

in fotos

(gallery)

AI news that's fit to print

in quicklink

Zach Seward, "the editorial director of AI initiatives at The New York Times," on bad and good examples of AI use at news platforms.

People look at tools like ChatGPT and think their greatest trick is writing for you. But, in fact, the most powerful use case for LLMs is the opposite: creating structure out of unstructured prose.

Clay's Ark

in books OctaviaE.Butler

Quite a leap (in time, style, and plot) from the other volumes in this series. Reading that the chronologically next book was written first, quite a few things must come together here.

Bezzle

in books CoryDoctorow

Surviving Sky

in books KritikaRao

Yumi and the Nightmare Painter

in books BrandonSanderson

Whether it's projection or not, the story read like an indictment of AI-generated 'art'.

Circe

in books MadelineMiller

Jinx

in books MattGemmell

Winter in de Hatertse Vennen

in fotos

(gallery)

Do Users Write More Insecure Code with AI Assistants?

in quicklink

Overall, we find that participants who had access to an AI assistant wrote significantly less secure code than those without access to an assistant. Participants with access to an AI assistant were also more likely to believe they wrote secure code, suggesting that such tools may lead users to be overconfident about security flaws in their code.