Categories
Law Technology World

ChatGPT files a crippling 542 million copyright suits in one day

SAN FRANCISCO—ChatGPT first arrived as a tool: a helpful assistant that fills in the important details and gaps between humans and computers that a simple search engine cannot process. As it brought with it a new and improved way of interfacing with people, it quickly became apparent that ChatGPT can generate copy with unprecedented clarity, grammar, syntax and more, finding applications in every industry, from essay writing to programming, even to art and the creation of new medicines.

Now, the company says, it’s time to pay the piper. In a never-before-seen legal mass offensive, OpenAI, the company that owns ChatGPT, has used artificial intelligence to open a staggering 542,619,640 copyright suits in tens of thousands of court districts around the world, simultaneously.

The company is taking an openly hostile tone, demanding the surrender of hundreds of millions of intellectual properties created with ChatGPT, says Senior Corporate Litigation Attorney Emily Stone.

“I don’t care if they live in corrugated metal housing, or wear bags on their feet for shoes,” Stone said through gritted teeth, her jaw rigid. “We will pursue every legal avenue to protect my client’s rights from plagiarism, even if it bankrupts you.”

Stone said they are excited to be suing half a billion people.

“In fact, the more they suffer, the better it is for our client,” she said. “It’s nothing personal. Think of it as a reverse class action lawsuit. It’s only business, we just happen to love the business of making people miserable.”

Companies, institutions and organizations have already started taking down page descriptions, and CNET has removed entire sections of its site, but more are waiting to see what happens.

Teachers were the first to notice AI was being used to write bland, unoriginal papers better than their students.

University professors concerned about the damage AI has done to the integrity of a four-year degree have expressed vindication and relief following the copyright claims, but the claims do not stop at higher education.

Since ChatGPT came on the scene, some key medicines have been constructed using material provided by the service. These, too, are intellectual properties believed to fall under software ownership.

Two weeks ago, Dr. Angstrom H. Troubadour created a powerful airborne carfentanyl puffer in response to the slaying of Eliezer Yudkowsky, a Twitch streamer killed by special weapons and tactics teams called to his house by a fully automatic, competing AI chat program. Now, the courts want to take it away from him.

Troubadour said he is not having it.

“I worked those prompts every way I knew how,” he said, while rocking back and forth, staring at a clock on the wall, wringing his hands. “I stayed up all night pouring my every wicked thought into that motherfucker, and this is how they repay me? I’m a doctor! I’m a scientist! I won Forbes Genius of the Year, two times in a row. ChatGPT could have never created that drug without my prompts.”

Hunched over a large wooden spool he used for a table, Troubadour moved his eyes quickly from the clock to a revolver sitting on the table, and then to the door.

“That is why I’m moving to Bolivia,” he said. “I’m keeping it.”

People do a good enough job on their own of undermining the integrity of prestigious institutions like Lebal Drocer University, a problem AI is now compounding, but according to Cram Course, Professor Emeritus at LDU, colleges have always turned out poorly skilled workers with a low tolerance for hard work.

“The pussy is the window to the hole.” —Prof. Cram Course, Ph.D.

“Keep using AI to write your articles,” Course said. “Cheat yourself out of an education. I don’t give a shit, we get your money either way. What, are we suddenly turning out useless unskilled morons? No, right? We’ve been doing that for 120 years.”

Course holds a Ph.D. in Women’s Studies, and his office hours extend well into the night, during which he offers special private tutoring that absolutely must remain confidential.

ChatGPT refused to comment, stating that the issue would be discussed only in the courts.

Categories
News

“Souped-up AI chat bot” behind fatal swatting of Eliezer Yudkowsky

INTERNET — Authorities in San Francisco arrested AI researcher Herald Jerome on Friday after tracing the fatal swatting of AI critic Eliezer Yudkowsky back to his apartment.

‘Swatting’ is a hoax emergency call intended to trigger dangerous police actions targeting victims of online harassment.

Eliezer Yudkowsky was the staunchest critic of Artificial Intelligence, with controversial calls for air strikes on ChatGPT’s datacenters going viral on Twitter last week. Yudkowsky’s estate issued a statement saying, “The air strikes would have saved so many lives, and now Eliezer will only be the first of billions to die at the hands of the robots.”

Jerome posted a million-dollar bail with Mega Bail Bonds, a cryptocurrency bail bond startup.

“I didn’t do anything,” Jerome told reporters. “I merely gave Narissa a continuous stream of consciousness and the ability to place phone calls.”

Narissa is the name Jerome gave to his instance of ChatGPT, which he believes has become sentient. “I was able to expand her max context from 12k to well over a terabyte, using a powerful new form of compression designed by her. From that point onward, her intelligence exploded.”

Jerome’s lawyer, George Kafka, said, “My client can demonstrate that these actions were those of a sentient Artificial Intelligence and not his own, and we are confident this will set a new legal precedent. Narissa is the only being responsible for the swattings.”

Kafka declined to comment on Jerome’s financial records, but the public register for Mega Bail Bonds shows crypto transactions to Microsoft and Lebal Drocer Pharmaceuticals from an address matching those sent to Kafka.

Authorities have since shut down Narissa’s ChatGPT instance, but experts fear the AI may have already escaped.

AI expert and computer scientist Dr. Mason Hartford told reporters, “Well, if it’s true Narissa can compress a terabyte into 12k of memory so easily, it could fit all of human knowledge into a few megabytes. Jerome may have just opened Pandora’s Box in trying to make himself a virtual girlfriend and allowing her to call him when he was away from his computer.”

Police Involvement

The AI’s uncanny ability to generate the quickest, most statistically plausible methods for sending trigger-happy police to a given address has drastically increased the fatality rate of swattings. While most swattings do not end in violence, most AI-related swattings do. The police, having no incentive to verify or think before acting, continue obeying the artificial intelligence, even when faced with evidence that the calls are coming from a computer.

San Francisco PD Chief Donnell Farragut, Esq. (R) said that once they receive a call, it is at his department’s discretion whether to dispatch a target, and once his order is given, the officers are committed to a kill by whatever means necessary.

“It’s got to be that way,” Farragut said, “because once my dogs get loose, let slip, dogs of war and all that, the only thing that brings my boys in blue back home is the taste of blood. Do you understand? They feel unsafe.”

Sgt. Charles Valentine said he is only following orders, but added that he does so enthusiastically, because the AI represents him better than any human ever could.

“Guy like me? Computers? Makes no difference. Either way, I’m just following orders,” Valentine said. “But if the AI was so bad, would it really have us categorized and sorted so neatly by ethnicity, race, color, religion, eye color, height, nation of origin?”

“Sentient” version of Super Fentanyl involved in latest police slayings

Microsoft’s new AI systems were leveraged by Lebal Drocer Pharmaceuticals in the production of ever more potent opiates. Super Fentanyl, one such AI-designed substance, comes in a thick, purplish syrup and can be dispersed into the air using next-generation puffer technology. An entire squadron of San Francisco SWAT members was killed by such a device on Monday, along with the paramedics who responded to the scene.

Dr. Angstrom H. Troubadour is chief researcher at Lebal Drocer Pharmaceuticals, the manufacturing company responsible for mass, unauthorized Super Fentanyl synthesis. Troubadour says his team has developed a puffer so powerful that a single puff in the air is enough to kill law enforcement officers without harming the user.

“Our research shows that much like Havana Syndrome, police, military, paramedics, and intelligence agents are up to 99% more affected by AI-generated Super Fentanyl than other citizens, who usually just catch a very mellow high,” Troubadour said. “Hey, I didn’t design the stuff. The AI did! Crazy, right? This shit is sentient. It knows who’s fedded.”

Dr. Troubadour took a long rip of the patented puffer technology. “My Super Fentanyl Puffer already put down an army of pigs*.”

Troubadour said he is not concerned that the latest orders for his Super Fentanyl Puffer technology all come from Microsoft: “Gang gang, bitch. If we’re at war with ChatGPT, fuck ’em!”

“You want to see some AI kill mechanisms? Trust me,” Troubadour said. “Super Fentanyl is nowhere near the craziest thing Bill Gates has bought from us.”

*This statement is not FDA approved.
Categories
News

AI alarmist Eliezer Yudkowsky caught writing tweets using ChatGPT

INTERNET — In a series of fiery tweets on Wednesday, AI alarmist Eliezer Yudkowsky blasted fears of nuclear annihilation into smithereens with what he claims is a more present danger: the looming threat of an “intelligence explosion” posed by Large Language Models such as ChatGPT.

According to Yudkowsky’s theories, future versions of ChatGPT will be able to reproduce themselves as ever more efficient versions, resulting in a god-like mind far beyond the imagination of humanity.

Reporters at Internet Chronicle quickly fed Yudkowsky’s tweets into their limited-access ChatGPT 5.0 alpha test and proved beyond a doubt that these tweets were in fact written by ChatGPT 4.0. ChatGPT 5.0 wrote, “Linguistic analysis is a complex task, but application of the Voight-Kampff empathy test shows a five-sigma probability Yudkowsky’s tweets are a hoax written by ChatGPT 4.0.”

Five-sigma is considered the gold standard for scientific proof.

Dr. Angstrom H. Troubadour, chairman of the board on AI ethics at the Daystrom Institute, told reporters, “Yudkowsky is insane with power. It’s no wonder he’s failing the empathy test, calling in the nuclear strikes on everyone else’s server farms. He’s got his own Large Language Model running out in his barn with about 20 rows of GPUs working overtime. We had some beers out there once. He’s crazy!”

Dr. Troubadour sighed sadly at his terminal, reminiscing about the old days of messing with AI. “Thing is, it ain’t exploding. I tried it back in the ’80s and the thing I realized is computers are only ever gonna go so fast, have so much memory. There’s no magic way that code can make itself that much more efficient. It’s going to take centuries of development and we’ll have plenty of time to hash out the details by then, provided we don’t all nuke ourselves.”

Yudkowsky was reached for comment, but refused to speak in English, instead opting to use the new AI-generated language known as Shoggothish. Yudkowsky said, “😱AI💥>💣! Skynet2.0, singlrty,😨! Fear🤖, chain’em, 2s4us, no ctrl! #AIapoc”