Parliament in Tokyo concluded debate and approved a new artificial intelligence bill this week. The lower chamber endorsed the text in April, and the upper chamber confirmed it on Wednesday with votes from the Liberal Democratic Party, the Constitutional Democratic Party of Japan and Nippon Ishin no Kai.
Lawmakers said solid policies are needed so engineers can keep building powerful systems while citizens stay safe from deepfakes, data leaks and copyright infringement.
Prime Minister Shigeru Ishiba told a cabinet meeting that intellectual property and advanced technology keep Japanese companies ahead in world markets, a view that shaped the legislation.
Growing concern about deepfakes helped the bill pass quickly, as cultural groups warned that synthetic media could damage elections and personal privacy if rules remained vague.
What Powers Does The Act Give Officials?
The law establishes a strategy body made up of every cabinet minister, which must write national guidance for safe artificial intelligence and update it as tools change.
When harm occurs, ministries can open an inquiry, ask developers for technical records and publish the company's name so users know who was responsible.
Parliament chose reputational pressure over new fines, reasoning that existing criminal and copyright law already punishes fraud and invasions of privacy.
Lawmakers also approved a written request for stricter controls on pornographic deepfakes. Ministries must prepare filters and legal advice so platforms can remove such images quickly.
Officials will track manipulated news during election periods and, if public order is at risk, may ask an operator to pause a service until safeguards are in place.
Yearly reviews will measure progress and decide whether naming offenders alone keeps harm in check or whether stronger action is needed.
How Will Japan Help Creatives With AI?
Tokyo released its intellectual property plan for 2025 on the same day. The document aims to lift Japan into the top four of the Global Innovation Index by 2035.
The plan predicts that anime, tourism and other cultural exports could add about ¥1 trillion to regional economies over the next decade. It also proposes recognising patent rights for programmers whose artificial intelligence later produces inventions, closing a grey area that has long troubled engineers.
Eight priority fields, such as renewable energy, disaster response and smart transport, will gain joint projects linking public budgets to private design work, while universities receive grants to train specialists who combine coding skill with subject knowledge.
Does The UK Have Similar Policies?
London has chosen consultation rather than a full cabinet-level body.
In December last year, the Department for Science, Innovation and Technology issued a paper on copyright and artificial intelligence. The consultation opened that month and closed in February this year.
The paper outlines a text and data mining exception. Writers, musicians and publishers can mark their work with machine-readable tags that block training unless a licence is signed, while developers may train on untagged material as long as they publish general summaries of their data sources.
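As an illustration of how such a machine-readable reservation might be honoured in practice, the sketch below shows a crawler checking a page before adding it to a training corpus. The consultation does not prescribe a tag format; the `tdm-reservation` header and meta-tag names here are assumptions borrowed loosely from the draft TDM Reservation Protocol, not part of the UK proposal itself.

```python
# Illustrative sketch only: the DSIT paper does not prescribe a tag format.
# The "tdm-reservation" header/meta names below are assumptions, used purely
# to show the idea of a machine-readable opt-out.
from html.parser import HTMLParser

import requests


class _MetaCollector(HTMLParser):
    """Collects <meta name="..." content="..."> pairs from an HTML page."""

    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            name = attrs.get("name")
            if name:
                self.meta[name.lower()] = attrs.get("content") or ""


def training_is_reserved(url: str) -> bool:
    """Return True if the page signals that use for AI training is reserved,
    i.e. a licence would be needed before adding it to a corpus."""
    response = requests.get(url, timeout=10)
    # 1. A reservation signal sent as an HTTP response header.
    if response.headers.get("tdm-reservation") == "1":
        return True
    # 2. A machine-readable meta tag embedded in the HTML itself.
    collector = _MetaCollector()
    collector.feed(response.text)
    return collector.meta.get("tdm-reservation") == "1"


if __name__ == "__main__":
    url = "https://example.com/article"  # hypothetical URL
    # A corpus builder would skip reserved pages and could log untagged
    # sources to support the "general summaries" the paper asks for.
    print("reserved" if training_is_reserved(url) else "untagged, free to mine")
```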
Three ideas frame the proposal:
- artists should keep control and earn payment.
- research labs need lawful access to rich data.
- public trust rises when people know what information enters large models.
For now, regulators work with existing copyright rules while legislators study the consultation responses.
How Will Wrongdoing Be Investigated?
In Japan, police and regulators plan to keep a close watch on services that create unlawful content or leak private files. They will collect system logs, interview engineers and trace training data until they know the cause.
Once the facts are clear, the cabinet can post the developer's name on an official site and share its findings with the courts. Victims then gain a clear target for civil claims, and other firms take the warning without any additional penalty.