When you turn on the faucet, you expect the water that comes out to be clean. When you go to the bank, you expect your money will still be there. When you go to the doctor, you expect they will keep your medical information private. Those expectations exist because there are rules to protect you. But when a technology arises almost overnight, the problems come first. The rules, you’d hope, would follow.

Right now, there’s no technology with more hype and attention than artificial intelligence. Since ChatGPT burst onto the scene in 2022, generative AI has crept into nearly every corner of our lives. AI boosters say it’s transformative, comparing it to the birth of the internet or the Industrial Revolution in its potential to reshape society. The nature of work itself will be transformed. Scientific discovery will accelerate beyond our wildest dreams. All this from a technology that, right now, is mostly just kind of good at writing a paragraph.


The concerns about AI? They’re legion. There are questions of privacy and security. There are concerns about how AI impacts the climate and the environment. There’s the problem of hallucination: AI will completely make stuff up, with tremendous potential for misinformation. There are liability concerns: Who is responsible for the actions of an AI, or an autonomous system running off of one? Then there are the already numerous lawsuits around copyright infringement related to training data. (Disclosure: Ziff Davis, CNET’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

Those are just today’s worries. Some argue that a potential artificial intelligence smarter than humans could pose a massive, existential threat to humanity.

What to do about AI is an international debate. In Europe, the EU AI Act, which is currently being phased in, imposes guidelines on AI-based systems based on their risk to individual privacy and safety. In the US, meanwhile, Congress recently proposed barring states from enforcing their own AI rules for a decade without a national framework in place, before backing off during last-minute negotiations over the big tax and spending bill.

“I think in the end, there is a balance here between enjoying the innovation of AI and mitigating the risks that come with AI,” Alon Yamin, CEO of Copyleaks, which runs an AI-powered system for detecting AI-generated writing, told me. “If you’re going too far in one end, you will lose something. The situation now is that we’re very far to the direction of no regulation at all.”

Here’s a look at some of the issues raised around AI, how regulations might or might not address them and what it all means for you.


Different approaches, with an ocean in between

Listen to the debates in Congress about how to regulate artificial intelligence, and a refrain quickly becomes apparent: AI companies and many US politicians don’t want anything like the rules that exist in Europe.

The EU AI Act has become shorthand for a strict regulatory structure around AI. In brief, it requires companies to ensure their technology is safe, transparent and responsible. It sorts AI technologies into categories based on the level of risk. The highest-risk categories are either prohibited entirely (things like social scoring or manipulative technologies) or heavily restricted (things like biometrics and tools for hiring and law enforcement). Lower-risk technologies, like most of the work done by large language models we’re familiar with (ChatGPT, etc.), are subject to less scrutiny but still must meet certain transparency and privacy requirements.

A key feature of the EU’s standards and those in other places, like the United Kingdom, is transparency about the use of AI. 

“What these things are fundamentally saying is, we’re not trying to block the use of AI but giving consumers the right to opt into it or not or even to know it’s even there,” said Ben Colman, CEO of the identity verification company Reality Defender.

During a May hearing on AI regulation in the US Senate Commerce, Science and Transportation Committee, Sen. Ted Cruz referred to the EU’s standards as “stifling” and “heavy-handed.” Cruz, a Texas Republican, specifically objected to any kind of prior approval for AI technologies. He asked OpenAI CEO Sam Altman what effect similar rules would have on the industry in the US, and Altman said it would be “disastrous.” 

Earlier this month, Meta said it wouldn’t sign the EU’s Code of Practice for general-purpose AI, which is intended to provide a framework to help AI companies follow the regulations of the EU AI Act. In a post on LinkedIn, Joel Kaplan, Meta’s chief global affairs officer, called it an “over-reach” that “will throttle the development and deployment of frontier AI models in Europe.”

“Europe is heading down the wrong path on AI,” Kaplan said.

But regulations focused on high-risk systems like those used in hiring, health care and law enforcement might miss some of the more subtle ways AI can affect our lives. Think about the spread of AI-generated slop on social media or the creation of realistic-looking videos for political misinformation. Those are also social media issues, and the battle over regulating that technology to minimize its harms may illuminate what could happen with AI.


Lessons from social media

After a South by Southwest panel in March on regulating AI, I asked Harvard Law School professor Lawrence Lessig, long a vocal observer of tech’s problems, what worried him most about AI. His response: “AI totally screwing up in the context of social media and making it so we have no coherence in our understanding of national politics.”

Social media has long been fraught with harmful consequences. The spread of misinformation and the erosion of trust over the last decade or so are largely results of the growth of these networks. Generative AI, which can reinforce biases and produce believable but false content with ease, now poses those same problems. On top of those parallels, some of the companies and key figures in AI come straight from the world of social media technology, like Meta and Elon Musk’s X.

“We’re seeing a lot of the same repeats of social media fights, of privacy fights where companies do whatever they want and do a sort of vague gesture of doing something about it,” said Ben Winters, director of AI and privacy at the Consumer Federation of America. 

There are some key differences between those fights and the ones around AI, Winters said. One is that lawmakers and regulators are familiar with the mistakes associated with social media and want to avoid repeating them. “I think we’re ahead of the curve in terms of response, but one thing that I really hope we can see at the federal level is a willingness to put some basic requirements on these companies,” he said.

At the May Senate committee hearing, OpenAI’s Altman said he’s also wary of repeating past mistakes. “We’re trying to learn the lessons of the previous generation,” he said. “That’s kind of the way it goes. People make mistakes and you do it better next time.”

What kinds of AI regulations are we talking about?

In my conversations with artificial intelligence experts and observers, some themes have emerged regarding the rules and regulations that could be implemented. They boil down, in the short term, to questions about the role of AI in impactful decision-making, misinformation, copyright and accountability. Other concerns, like the threat of “superintelligence” or the loss of jobs, also exist, although those are far more complicated.

High-risk systems

This is where the EU AI Act and many other international laws around artificial intelligence focus. In the US, it’s also at the center of Colorado’s AI law, which passed in 2024 and takes effect in 2026. The idea is that when AI tools are used to make important decisions about things like employment, health care or insurance, they should be used in a way that minimizes discrimination and errors and maximizes transparency and accountability.

AI and other predictive technologies can be used in a lot of different ways, whether by governments for programs like child protective services or by private entities for advertising and tracking, Anjana Susarla, a professor at Michigan State University, told me recently. 

“The question becomes, is this something where we need to monitor the risks of privacy, the risks of consumer profiling, should we monitor any kind of consumer harms or liabilities?” she said.

Misinformation

Gen AI has a well-documented history of making stuff up. And that’s if you’re using it in good faith. It can also be used to produce deepfakes — realistic-looking images and video intended to manipulate people into believing something untrue, changing the behavior of voters and undermining democracy. 

“Social media is the main instrument now for disinformation and hate speech,” said Shalom Lappin, a professor of computational linguistics at Queen Mary University of London and author of the new book Understanding the Artificial Intelligence Revolution: Between Catastrophe and Utopia. “AI is a major factor because much of this content is coming from artificial agents.”

Lies and rumors have spread since the dawn of communication, but generative AI tools like video and image generators can produce fabricated evidence more convincing than any past counterfeit, at tremendous speed and very little cost. On the internet today, too often you cannot, and should not, believe your own eyes.

It can be hard for people to see just how easy it is to fake something — and just how convincing those fakes can be. Colman, with Reality Defender, said seeing the possible problem is believing. “When we show somebody a good or a bad deepfake of them, they have that ‘a-ha’ moment of, ‘wow, this is happening, it can happen to me,'” he said.


Sen. Josh Hawley, a Missouri Republican, points to a poster during a July 2025 hearing on artificial intelligence model training and copyright infringement.

Chip Somodevilla/Getty Images

Copyright

There are two copyright issues when it comes to generative AI. The first is the most well-documented: Did AI companies violate copyright laws by using vast amounts of information available on the internet and elsewhere without permission or compensation? That issue is working its way through the courts, with mixed results so far, and it will likely be a long time before anything all-encompassing comes out of it.

“They’ve essentially used everything that’s available. It’s not only text, it’s images, photographs, charts, sound, audio files,” Lappin said. “The copyright violations are huge.”

But what about the copyright of content created by AI tools? Is it owned by the person who prompted it or by the company that produced the language model? What if the model produces content that copies or plagiarizes existing content without credit, or violates copyrights?

Accountability

The second copyright issue gets at the problem of accountability: What happens when an AI does something wrong, violates a law or hurts somebody?

On the content front, social media companies have long been protected by a US legal standard, known colloquially as Section 230, that says they aren’t responsible for what their users post. But that’s a harder test for AI companies, because the user isn’t the one creating the content; the company’s language model is, Winters said.

Then there are actual, material harms that can come from the interactions people have with AI. A prominent example is mental health, where people using AI characters and chatbots as therapists have received bad advice: the kind that could cost a human provider their license, and in some cases the kind that has led to self-harm or worse for the person involved. The issue is magnified even more when it comes to children, who likely have even less understanding of how they should treat what an AI says.

Who should regulate AI?

The question of whose job it is to regulate AI was at the heart of the congressional debate over the moratorium on state laws and rules. In that discussion, the question was whether, in the US, companies should have to navigate one set of rules passed by Congress or 50 or more sets of regulations implemented by the states.

AI companies and business groups said the creation of a “patchwork” of laws would hinder development. In a June letter to Senate leaders, Consumer Technology Association CEO and Vice Chair Gary Shapiro pointed to more than 1,000 state bills that had been introduced regarding AI in 2025 so far.

“This isn’t regulation — it’s chaos,” he wrote. 

But those bill introductions haven’t turned into an avalanche of laws on the books. “Despite the amount of interest from policymakers at the state level, there haven’t been a ton of AI-specific laws passed in the United States,” said Cobun Zweifel-Keegan, managing director, DC, for the privacy trade group IAPP.

States can experiment with new approaches. California can try one thing, Colorado another and Texas something entirely different. An approach that works will spread to other states and could lead to rules that protect consumers without stifling businesses.

But other experts say that in the 21st century, companies with the size and scope of those pushing artificial intelligence can only truly be regulated at the international level. Lappin said he believes an appropriate venue is international trade agreements, which could keep companies from sheltering services in certain countries and keep customers from circumventing protections with VPNs.

“Because these are international rather than national concerns, it seems to me that without international constraints, the regulation will not be effective,” Lappin said.

What about superintelligence?

So far, we’ve mostly focused on the impact of the tech that is available today. But the biggest boosters of AI are always talking about how much smarter the next model will be and how soon we’ll get technology that exceeds human intelligence. 

Yes, that worries some folks. And they think regulation is important to ensure AI doesn’t view that explanation from Morpheus in The Matrix as an instruction manual for world domination. The Future of Life Institute has suggested a government agency with a view into the development of the most advanced AI models. And maybe an off switch, said Jason Van Beek, FLI’s chief government affairs officer. “You theoretically would not be able to control them at some point, so just trying to make sure there’s some technology that would allow these systems to be turned off if there’s some evidence of a loss of control of the situation,” he told me. 

Other experts were more skeptical that “artificial general intelligence” or superintelligence or anything like that was on the horizon. A survey earlier this year of AI experts found three-quarters doubted current large language models would scale up to AGI. 

“You’re getting a lot of hype over general intelligence and stuff like that, superintelligent agents taking over, and I don’t see a solid scientific or engineering basis for those fears,” Lappin said.

The fact is, human beings don’t need to wait for a genius-level robot to pose an existential threat. We’re more than capable of that ourselves. 

Should regulators worry about job losses?

One of those more immediate threats is the possibility that AI will cause mass layoffs as large numbers of jobs are replaced by AI or otherwise made redundant. That poses significant social challenges, especially in the United States, where many fundamentals of life, like health care, are still tied to having a job. 

Van Beek said FLI has suggested the US Department of Labor start keeping track of AI-related job losses. “That’s certainly a major concern about whether these frontier technologies are going to be taking over huge swaths of industries in terms of jobs or those kinds of things and affecting the economy in very, very deep ways,” he said.

There have been major technological innovations that have caused massive displacement or replacement of workers before. Think of the Industrial Revolution or the dawn of the computer age. But those often happened over decades or generations. AI could throw the economy into chaos over a matter of years, Lappin said. The Industrial Revolution also put industries out of work at varying times, but AI could hit every industry at once. “The direction is toward much, much more widespread automation across a very broad domain or range of professions,” he said. “And the faster that happens, the much more disruptive that will become.”

What matters most? Transparency and privacy

The first step, as with laws already passed in the EU, California and Colorado, is to provide some sort of visibility into how AI systems work and how they’re being used. For you, the consumer, the citizen, the person just trying to exist in the world, that transparency means you have a sense of how AI is being used when you interact with it. This could be transparency into how models operate and what went into training them. It could be understanding how models are being used to do things like decide who a company hires and fires.

Right now, that doesn’t really exist, and it definitely doesn’t exist in a way that’s easy for a person to understand. Winters suggested a system similar to that used by financial institutions to evaluate whether someone can get loans — the credit report. You have the right to inspect your credit report, see what has been said about you and ensure it’s right. “You have this number that is impactful about you; therefore, you have transparency and can seek corrections,” he said.

The other centerpiece of most proposals right now is privacy: protecting people against unauthorized re-creations of themselves in AI and guarding against exploitation of personal information and identity. While some existing, technology-neutral privacy laws should be able to protect consumers, policymakers need to keep an eye on the changing ways AI is used to ensure those laws are still doing the job.

“It has to be some kind of balance,” Susarla said. “We don’t want to stop innovation, but on the other hand we also need to recognize that there can be real consequences.”


