If you work with AI tools in Estonia, Latvia, or Lithuania — whether you’re a developer integrating APIs, a business owner using ChatGPT for customer service, or just someone who uses Claude to draft emails — there are laws that affect you. Some are already in force. Others kick in on 2 August 2026. And almost nobody in the region is talking about them clearly.
This post breaks it all down: what’s already law, what’s coming, and what you actually need to do about it.
The Big Picture: Three Layers of Regulation
AI regulation in the Baltics comes from three sources:
Layer 1: The EU AI Act — a single regulation that applies directly in all three countries. No national transposition needed. This is the big one, and most of its provisions take effect on 2 August 2026.
Layer 2: National laws — each Baltic state has taken its own approach to AI legislation, ranging from comprehensive (Latvia) to deliberately minimal (Estonia).
Layer 3: GDPR — still the primary regulation affecting how AI tools process personal data. If you’re feeding customer data into an AI tool, GDPR applies regardless of anything else.
Let’s look at each country, then zoom out to the EU AI Act timeline that affects everyone.
Latvia: The Legislator
Latvia is the most active of the three Baltic states when it comes to AI-specific laws. It’s one of very few EU countries to have created dedicated AI legislation.
What’s already law:
The AI Centre Law (March 2025) established the Latvian Artificial Intelligence Centre — a foundation that coordinates AI policy, runs a regulatory sandbox for testing AI systems, and advises government agencies. If you’re building an AI product and want to test it in a controlled environment with regulatory guidance, this is your entry point.
Deepfake criminal liability (May 2024) — Latvia was among the first EU countries to criminalize the use of AI-generated deepfakes in elections. Creating or spreading intentionally false deepfake content about political candidates during campaigns carries a penalty of up to five years in prison. This is already in force.
AI disclosure in political advertising (2024) — if you use AI to generate political advertising content in Latvia, you must disclose it. Voters have a right to know when they’re looking at synthetic media.
Financial market AI regulation (September 2025) — a dedicated law governing AI use in financial services, covering governance, risk management, and compliance requirements. If you work in fintech or banking in Latvia, this applies now.
What’s unique about Latvia: It has a functioning AI regulatory sandbox (detailed rules published January 2026), 17 designated agencies for EU AI Act market surveillance, and the only standalone AI development law in the Baltics.
Estonia: The Innovator
Estonia took the opposite approach to Latvia: no standalone AI law, and none is planned. The government’s explicit position is that the EU AI Act provides sufficient horizontal regulation, and Estonia should build on that framework rather than create competing national legislation.
What exists:
Instead of broad AI laws, Estonia made targeted amendments to specific sector laws. The Taxation Act now allows the tax authority to issue automated decisions without human intervention. The Environmental Charges Act has a similar provision for the Environmental Board. These are narrow, practical changes — not sweeping regulation.
Where Estonia leads: Strategy and funding. The AI and Data Action Plan (known as “Kratt,” after a creature from Estonian folklore) allocates €85 million for AI development during 2024–2026 — the largest per-capita AI investment in the Baltics. The “AI Leap” programme is integrating AI literacy into secondary schools. Estonia is betting on education and innovation rather than regulation.
What’s unique about Estonia: The lightest regulatory touch in the Baltics, the biggest AI funding, and the only Baltic state with a dedicated secondary-school AI education programme.
Lithuania: The Implementer
Lithuania also lacks standalone AI legislation, but it has been the most proactive in preparing for EU AI Act enforcement. In January 2025, the Lithuanian Parliament designated both required competent authorities — making it one of the earliest EU member states to do so.
What exists:
The Innovation Agency serves as the notifying authority, responsible for assessing and designating the bodies that carry out conformity assessments of high-risk AI systems. The Communications Regulatory Authority (RRT) is the market surveillance authority — it monitors AI systems already on the market and can investigate, require corrective action, initiate recalls, or impose penalties.
An AI regulatory sandbox is being established at the Innovation Agency, and Lithuania is building the only AI Factory in the Baltic states, targeting full operation by 2027. The AI Factory will cover the entire innovation chain from idea to product, with priority areas including cybersecurity, personalized medicine, industrial automation, and energy.
What’s unique about Lithuania: The most advanced EU AI Act implementation, direct parliamentary oversight through a dedicated AI Working Group in the Seimas, and the region’s only upcoming AI Factory.
The EU AI Act: What Changes on 2 August 2026
Regardless of which Baltic country you’re in, the EU AI Act is the regulation that will affect the most people. Here’s the timeline:
Already in force (since February 2025):
Certain AI systems are outright banned across the EU:
- Social scoring systems that rate people based on behaviour
- AI that uses subliminal manipulation to distort decisions
- Untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases
- Emotion recognition systems in workplaces and schools
- Predictive policing that assesses a person’s risk of offending based solely on profiling
If you’re building or using any of these, stop. It’s illegal now.
Already in force (since August 2025):
Providers of general-purpose AI models (think OpenAI with GPT, Anthropic with Claude, Google with Gemini) must comply with transparency and documentation requirements. This affects the model providers, not end users.
Coming on 2 August 2026 — the big one:
This is when most of the EU AI Act’s remaining provisions take effect:
- High-risk AI systems must meet strict requirements: risk management, data quality, documentation, human oversight, accuracy, robustness, and cybersecurity. High-risk categories include AI used in critical infrastructure, education, employment, healthcare, law enforcement, border management, and administration of justice.
- Transparency obligations begin. If you deploy a chatbot, you must tell users they’re interacting with AI. If you generate AI content (text, images, audio, video), it must be labelled as AI-generated (see the sketch after this list).
- Each EU member state must have at least one AI regulatory sandbox operational.
- Enforcement begins at both national and EU level. Non-compliance can result in fines of up to €35 million or 7% of global annual turnover.
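What do those transparency duties look like in practice? Here’s a minimal sketch in Python, assuming a hypothetical customer-service bot; the disclosure wording, the `AI_DISCLOSURE` constant, and the `label_ai_content` helper are all illustrative, since the Act mandates the disclosure and the label, not any particular phrasing or implementation.

```python
# A minimal sketch of the two deployer-facing transparency duties that
# apply from 2 August 2026. All names and wording here are illustrative;
# the AI Act requires the disclosure and the label, not this exact text.

AI_DISCLOSURE = (
    "You are chatting with an AI assistant. "
    "You can ask for a human agent at any time."
)

def open_support_chat(send) -> None:
    """Show the AI disclosure before the first bot reply."""
    send(AI_DISCLOSURE)

def label_ai_content(text: str) -> str:
    """Attach a visible AI-generated label to synthetic text."""
    return f"{text}\n\n[This content was generated with AI.]"

# Usage sketch: disclose at session start, label before publishing.
open_support_chat(print)
print(label_ai_content("Spring sale: 20% off all plans this week."))
```

For images, audio, and video the Act also expects machine-readable marking, so a visible text footer like the one above covers only the simplest text case.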
Coming August 2027:
Rules for high-risk AI systems embedded in regulated products (medical devices, vehicles, machinery, etc.) apply.
What This Means for You
If you use AI tools in your business
You’re probably fine for now, but prepare for August 2026. The main thing to watch: if you use a chatbot for customer service, you’ll need to disclose that it’s AI. If you use AI to generate marketing content, consider whether it needs labelling. GDPR still governs how you handle personal data through AI tools.
If you’re a developer building AI systems
Check the EU AI Act’s risk classification. If your system falls into a high-risk category, you’ll need conformity assessments, risk management documentation, human oversight mechanisms, and technical documentation — all before August 2026. Latvia’s AI sandbox and Lithuania’s Innovation Agency can help you navigate this.
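As a rough first pass before involving counsel, you can encode the high-risk areas from the timeline above as a checklist and test your system against it. This is a triage sketch under the assumption that you can describe your system as a set of deployment areas; the area names paraphrase Annex III, and a match means “look closer”, not a legal classification.

```python
# First-pass triage against the high-risk areas listed earlier in this
# post (paraphrasing Annex III of the EU AI Act). A match means
# "investigate further", not a definitive legal classification.

HIGH_RISK_AREAS = {
    "critical_infrastructure",
    "education",
    "employment",
    "healthcare",
    "law_enforcement",
    "border_management",
    "administration_of_justice",
}

def first_pass_risk_check(deployment_areas: set[str]) -> str:
    """Return a rough triage verdict for a system's deployment areas."""
    hits = sorted(deployment_areas & HIGH_RISK_AREAS)
    if hits:
        return (
            f"Potentially high-risk ({', '.join(hits)}): budget for "
            "conformity assessment, risk management, human oversight "
            "and technical documentation before August 2026."
        )
    return "No high-risk area matched; still check the prohibited practices."

# Example: a CV-screening tool clearly touches the employment area.
print(first_pass_risk_check({"employment", "marketing"}))
```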
If you work in finance in Latvia
The Financial Market Digital Resilience and AI Use Act is already in force. Review your AI governance and risk management practices now.
If you’re in politics or media
In Latvia, deepfake use in elections is already a criminal offence. Across all three countries, the EU AI Act’s transparency rules for AI-generated content kick in August 2026. Disclose, disclose, disclose.
If you’re a student or educator
The University of Latvia has already adopted rules governing AI use in academia (effective March 2026). Expect other Baltic universities to follow. Students must disclose AI use in their work, but AI detection tools alone cannot determine grades or prove misconduct.
The Bottom Line
The Baltics are approaching AI regulation from three different angles — Latvia through legislation, Estonia through funding and innovation, Lithuania through institutional readiness. But the common thread is the EU AI Act, which applies equally in all three countries.
The most important date to remember: 2 August 2026. That’s when the majority of the EU AI Act takes effect, including high-risk system requirements, transparency obligations, and enforcement.
If you’re using AI tools casually, you likely won’t be directly affected. If you’re building or deploying AI systems in a professional context, now is the time to understand where your systems fall in the risk classification and what compliance looks like.
This isn’t about fear — it’s about being prepared. The companies and professionals who understand these rules early will have a competitive advantage over those who scramble to comply at the last minute.
This post is part of AI Baltics’ mission to make AI practical for the Baltic region. Subscribe to get notified about new articles, upcoming webinars, and practical AI insights.
Have questions about how these regulations affect your specific situation? Get in touch at hello@aibaltics.com.
Sources and further reading:
- EU AI Act full text: https://eur-lex.europa.eu/eli/reg/2024/1689/oj
- EU AI Act timeline: https://artificialintelligenceact.eu/implementation-timeline/
- Latvia AI Centre Law: https://likumi.lv/ta/id/359339-maksliga-intelekta-centra-likums
- Latvia Criminal Law (Art. 90.1): https://likumi.lv/ta/id/88966-kriminallikums
- Estonia AI overview: https://regulations.ai/countries/Estonia
- Lithuania Ministry of Economy AI page: https://eimin.lrv.lt/en/sector-activities/digital-policy/artificial-intelligence/
