ChatGPT and Italy: History, Uses, and Legal Controversies


(this article was written with the support of DeepSeek – 10/03/2025)

1. The History of ChatGPT in Italy

  • Global launch (November 2022): ChatGPT was released by OpenAI on November 30, 2022, with Italian-language support from the start.
    (Source: official OpenAI blog)
  • Temporary block in Italy (March 2023): The Garante per la Protezione dei Dati Personali (the Italian Data Protection Authority) ordered the blocking of ChatGPT on March 31, 2023, citing GDPR violations (lack of consent for training data, absence of age checks on users).
    (Source: Garante Privacy order)
  • Relaunch (April 2023): OpenAI implemented changes (e.g., a more transparent privacy policy and tools to object to data processing), and the service was reactivated on April 28, 2023.
    (Source: Garante press release)

2. Fields Using ChatGPT in Italy

  • Academia:
    • Some universities (e.g., the Università di Bologna) have published guidelines for the ethical use of ChatGPT in research, prohibiting its use for exams and theses.
      (Source: Unibo Magazine)
    • Schools: The Ministry of Education set up a working group in 2023 to regulate the use of AI in classrooms.
  • Legal professions: A study by Federico Gazzetta (Università di Torino) examined the use of ChatGPT to draft legal documents, with warnings about errors and bias.
    (Source: ResearchGate)
  • Journalism: The news agency ANSA has tested ChatGPT for article drafts but prohibits its use for official news.
    (Source: ANSA)

3. Legal Controversies and Lawsuits

  • Action by the Garante Privacy (2023): Beyond the initial block, the Garante required OpenAI to provide:
    • Greater transparency about its algorithms.
    • Limits on the processing of sensitive data (e.g., health data).
      (Source: Garante Privacy)
  • Copyright: In 2023, the lawyer Alberto Casagrande raised doubts about the use of protected works to train ChatGPT, calling for intervention by AGCOM.
    (Source: Il Sole 24 Ore)
  • User complaints: Several complaints were filed with Codacons over errors in medical or legal answers, but no lawsuit was formalized (2023).

4. Conclusions and Outlook

  • Regulatory developments: The EU is debating the AI Act, which will affect ChatGPT in Italy (e.g., an obligation to label AI-generated content).
  • Advice for users: Always verify sources, and do not rely on ChatGPT for critical decisions (health, law).

Meta Description (for SEO)

“Discover the history of ChatGPT in Italy, its academic uses, and the legal disputes with the Garante Privacy. Official sources and verified data.”


Primary Sources

  1. OpenAI (2022), ChatGPT: Optimizing Language Models for Dialogue.
  2. Garante Privacy (2023), Provvedimento su ChatGPT (order on ChatGPT).
  3. Università di Bologna (2023), Linee guida per l’uso di ChatGPT (guidelines for the use of ChatGPT).

 

Dott.ssa Luana Fierro

 

The Global Patchwork of AI Regulation: A Comparative Analysis of DeepSeek Blockades


Introduction
The rapid evolution of artificial intelligence (AI) has sparked a global debate about its governance, balancing innovation with ethical, legal, and societal risks. Among the AI platforms drawing regulatory scrutiny is DeepSeek, a Chinese-developed AI system lauded for its advanced capabilities in data analysis and natural language processing. However, its proliferation has encountered resistance, with several states imposing restrictions or outright bans. This article examines the legal and policy frameworks underpinning these blockades, offering a comparative perspective on how divergent regulatory philosophies shape the global AI landscape.

 

1. China: Sovereignty and Controlled Innovation
As DeepSeek’s country of origin, China’s approach reflects its broader strategy of *state-centric AI governance*. While the platform operates domestically, its international reach is constrained by China’s own regulatory exports. The 2021 *Data Security Law* and 2022 *Algorithmic Recommendations Management Provisions* mandate strict compliance with national security and socialist core values. Foreign access to Chinese AI tools is often limited by reciprocal data localization requirements and fears of extraterritorial data access. Paradoxically, China promotes AI innovation domestically while restricting cross-border data flows, creating a “walled garden” for its technologies.

2. The European Union: Privacy and Fundamental Rights
The EU has emerged as a leader in *rights-based AI regulation*. DeepSeek’s alleged data practices—particularly its opaque training data sources—clash with the General Data Protection Regulation (GDPR). Concerns about non-compliance with data minimization, purpose limitation, and user consent have led several EU member states to restrict DeepSeek’s accessibility. The *AI Act*, which categorizes AI systems by risk level, may further complicate DeepSeek’s operations, as a general-purpose AI designation could trigger stringent transparency and accountability requirements.

3. The United States: National Security and Sectoral Fragmentation
U.S. restrictions on DeepSeek stem from national security anxieties rather than a unified regulatory framework. The Committee on Foreign Investment in the United States (CFIUS) has scrutinized partnerships involving DeepSeek, citing risks of data exploitation and ties to the Chinese government. Sector-specific bans, such as in defense or critical infrastructure, align with the U.S.’s ad hoc, sectoral regulatory model. Meanwhile, the lack of federal AI legislation creates ambiguity, leaving states like California to pioneer stricter rules on data transparency.

4. India and the Global South: Digital Sovereignty in Action

India’s 2021 IT Rules and push for data localization exemplify a growing trend among Global South nations to assert digital sovereignty. DeepSeek’s perceived opacity in handling Indian user data, coupled with geopolitical tensions, led to its inclusion in a 2023 list of restricted foreign apps. Similarly, countries like Vietnam and Indonesia have invoked cybersecurity laws to limit AI platforms that fail to establish local data centers or comply with content moderation mandates.

5. Authoritarian Regimes: Control Over Information Flows
States like Iran and Saudi Arabia have blocked DeepSeek under broader internet censorship regimes. Here, the rationale centers on information control: AI systems capable of generating or analyzing unrestricted content threaten state narratives. DeepSeek’s potential to bypass language-specific censorship tools has heightened these concerns, prompting preemptive bans.


Comparative Analysis: Divergent Philosophies, Common Threads

  • Legal Foundations:
    • Civil Law Systems (EU, China): Codified statutes prioritize state or individual rights.
    • Common Law Systems (U.S., India): Case law and regulatory agencies drive enforcement.
  • Motivations:
    • Security: U.S. and India emphasize geopolitical risks; China focuses on domestic stability.
    • Rights: The EU prioritizes privacy; authoritarian states suppress dissent.
    • Economic Sovereignty: Data localization laws in India and Vietnam aim to nurture domestic tech sectors.
  • Enforcement Mechanisms:
    • The EU employs centralized oversight (e.g., European Data Protection Board).
    • The U.S. relies on CFIUS and executive orders.
    • China combines legislative mandates with Communist Party oversight.

Implications for International Law and Governance
The fragmented regulatory landscape raises critical questions:

  • Jurisdictional Conflicts: Can states enforce AI regulations extraterritorially? GDPR-style fines for non-EU companies set a precedent, but compliance remains inconsistent.
  • Trade Tensions: Restrictions on AI tools may violate WTO agreements if deemed disproportionate trade barriers.
  • Ethical Fragmentation: Without harmonized standards, AI developers face conflicting demands, stifling global collaboration.

Conclusion: Toward a Coherent Framework?
The blockade of DeepSeek underscores a broader dilemma: how to regulate borderless technologies within sovereign legal systems. While the EU’s risk-based model and China’s state-control approach represent opposing poles, middle-ground solutions are emerging. Initiatives like the OECD’s AI Principles and the Global Partnership on AI (GPAI) hint at potential convergence. Yet, as states prioritize sovereignty and security, the path to a unified regulatory regime remains fraught. For now, DeepSeek’s fate serves as a microcosm of the tensions defining 21st-century tech governance—a contest between innovation and control, played out on a fractured global stage.

Dott.ssa Luana Fierro

Artificial Intelligence and Sensitive Data


Artificial Intelligence and Sensitive Data: A Dialogue Between Innovation, Law, and Vulnerability

The advent of artificial intelligence (AI) applications has redefined the boundaries of technology, economics, and, not least, law. These systems, powered by unprecedented amounts of data, promise efficiency, personalization, and innovative solutions to complex problems, from healthcare to smart city management. However, at the core of their functionality lies a crucial paradox: to provide increasingly sophisticated services, AI needs to access, process, and store information that is often classified as “sensitive” – biometric data, political or religious preferences, health conditions, sexual orientation, and more.

This interplay between technological progress and privacy protection raises legal and ethical questions of global significance. On the one hand, national and supranational legislators attempt to curb the risks of misuse, leveraging tools such as the GDPR in Europe or the CCPA in the United States. On the other hand, the relentless dynamism of AI challenges the static nature of regulation, creating tensions between innovation, fundamental rights, and collective security.

The issue, however, is not merely regulatory. The collection and use of sensitive data by AI call into question the very notion of individual autonomy: to what extent are users aware of the implications of surrendering their data? How can transparency be ensured in increasingly opaque (“black box”) algorithms? And, above all, how can we balance commercial interests, social utility, and the protection of vulnerable minorities, who are often subjected to systemic discrimination perpetuated precisely by AI systems?

AI Systems and Personal Data: Regulatory Principles and Frameworks in the EU and the U.S.

The use of personal data by artificial intelligence (AI) systems raises critical questions about privacy, accountability, and ethical governance. Below, we analyze the core principles and legal frameworks governing this issue in the European Union (EU) and the United States (U.S.), drawing on official sources.


I. Foundational Principles for AI and Personal Data

AI systems processing personal data must adhere to principles established in international and regional frameworks. Key principles include the following (a short code sketch after the list shows how a few of them might be checked programmatically):

  1. Lawfulness, Fairness, and Transparency
    • Data processing must have a legal basis (e.g., consent, contractual necessity) and be transparent to users (GDPR, Art. 5(1)(a)).
    • AI decisions affecting individuals must be explainable (EU AI Act, Art. 13).
  2. Purpose Limitation
    • Data collected for specific purposes (e.g., fraud detection) cannot be repurposed without consent (GDPR, Art. 5(1)(b)).
  3. Data Minimization
    • Only data strictly necessary for the AI’s function should be collected (GDPR, Art. 5(1)(c); U.S. FTC Guidelines, 2023).
  4. Accuracy
    • Systems must ensure data accuracy and allow corrections (GDPR, Art. 5(1)(d); NIST AI Risk Management Framework).
  5. Storage Limitation
    • Data retention periods must be justified (GDPR, Art. 5(1)(e)).
  6. Integrity and Confidentiality
    • Robust security measures are required to prevent breaches (GDPR, Art. 5(1)(f); U.S. Health Insurance Portability and Accountability Act (HIPAA)).
  7. Accountability
    • Organizations must demonstrate compliance with the above principles (GDPR, Art. 5(2); U.S. Executive Order on Safe AI).
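
These principles are legal norms rather than algorithms, but a few of them can be partially operationalized in software. The following minimal Python sketch is purely illustrative: the record structure, field names, purposes, and retention period are invented assumptions, not requirements drawn from the GDPR or any real system. It shows how a data pipeline might screen records for a legal basis, purpose limitation, storage limitation, and data minimization before they reach an AI model.

    from dataclasses import dataclass
    from datetime import datetime, timedelta, timezone

    # All constants below are illustrative assumptions, not legal requirements.
    ALLOWED_FIELDS = {"user_id", "transaction_amount", "timestamp"}  # data minimization
    ALLOWED_PURPOSES = {"fraud_detection"}                           # purpose limitation
    MAX_RETENTION = timedelta(days=365)                              # storage limitation

    @dataclass
    class Record:
        fields: dict            # raw attributes collected about a person
        purpose: str            # declared purpose of the processing
        collected_at: datetime  # when the data was collected
        consent: bool           # whether a valid legal basis (here: consent) exists

    def admit(record: Record, now: datetime):
        """Return a minimized copy of the record if processing is permissible,
        otherwise None. Each check mirrors one principle from the list above."""
        if not record.consent:                         # lawfulness, Art. 5(1)(a)
            return None
        if record.purpose not in ALLOWED_PURPOSES:     # purpose limitation, Art. 5(1)(b)
            return None
        if now - record.collected_at > MAX_RETENTION:  # storage limitation, Art. 5(1)(e)
            return None
        # Data minimization, Art. 5(1)(c): drop every field not strictly needed.
        return {k: v for k, v in record.fields.items() if k in ALLOWED_FIELDS}

    record = Record(
        fields={"user_id": 42, "transaction_amount": 99.5,
                "timestamp": "2025-03-01T10:00:00Z", "religion": "(sensitive)"},
        purpose="fraud_detection",
        collected_at=datetime(2025, 3, 1, tzinfo=timezone.utc),
        consent=True,
    )
    print(admit(record, datetime.now(timezone.utc)))  # the "religion" field is stripped

Real compliance obviously requires far more (accuracy, security, accountability records); the point is only that several of the listed principles translate naturally into gate checks at the data-ingestion boundary.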

II. European Union: The GDPR and the AI Act

The EU’s approach combines strict data protection rules with emerging AI-specific regulations.

  1. General Data Protection Regulation (GDPR)
    • Scope: Applies to all entities processing EU residents’ data, regardless of location (Art. 3).
    • Key Provisions:
      • Consent: Must be explicit, informed, and revocable (Art. 7).
      • Automated Decision-Making: Individuals have the right to opt out of decisions made solely by AI (Art. 22).
      • Data Protection Impact Assessments (DPIAs): Required for high-risk AI systems (Art. 35).
    • Source: GDPR Full Text.
  2. EU AI Act (2024)
    • Risk-Based Classification: Prohibits AI systems posing “unacceptable risk” (e.g., social scoring) and imposes strict requirements on “high-risk” AI (e.g., hiring algorithms) (Art. 5).
    • Transparency Obligations: Users must be informed when interacting with AI (Art. 52).
    • Source: EU AI Act Provisional Agreement.
  3. Enforcement
    • Supervised by national Data Protection Authorities (DPAs) and the European Data Protection Supervisor (EDPS).
    • Penalties: Up to €20 million or 4% of global annual turnover, whichever is higher, for GDPR violations (Art. 83(5)); a worked example follows this list.
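
For concreteness, Art. 83(5) GDPR caps fines for the gravest infringements at EUR 20 million or, for an undertaking, 4% of total worldwide annual turnover of the preceding financial year, whichever is higher. A tiny worked example in Python; the turnover figures below are hypothetical:

    # GDPR Art. 83(5): the cap is the HIGHER of EUR 20 million and 4% of
    # worldwide annual turnover. The turnover inputs below are hypothetical.
    def gdpr_fine_cap(annual_turnover_eur: float) -> float:
        return max(20_000_000.0, 0.04 * annual_turnover_eur)

    print(gdpr_fine_cap(10_000_000_000))  # 400000000.0 -> cap of EUR 400 million
    print(gdpr_fine_cap(100_000_000))     # 20000000.0  -> the EUR 20 million floor applies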

III. United States: Sectoral Laws and Emerging Frameworks

The U.S. lacks a comprehensive federal data protection law but regulates AI through sectoral laws and voluntary guidelines.

  1. Existing Laws
    • Federal Trade Commission Act (FTC Act): Prohibits “unfair or deceptive practices,” including misuse of personal data by AI (Section 5).
    • California Consumer Privacy Act (CCPA): Grants Californians rights to access, delete, and opt out of the sale of their data (amended by CPRA, 2023).
    • Health Data: HIPAA regulates AI in healthcare, requiring anonymization and patient consent.
  2. Proposed Federal Legislation
    • Algorithmic Accountability Act (2023): Requires impact assessments for AI systems in housing, employment, and healthcare.
    • AI Bill of Rights (2022): Non-binding framework emphasizing:
      • Safe and effective systems.
      • Protection against algorithmic discrimination.
      • Transparency and explainability.
      • Source: White House AI Bill of Rights.
  3. Agency Guidelines
    • NIST AI Risk Management Framework (2023): Voluntary standards for managing AI risks, including data privacy.
    • FTC Enforcement: Recent actions against companies like Amazon and OpenAI for data misuse.

IV. Comparative Analysis: EU vs. U.S.

Aspect               | European Union                     | United States
Regulatory Approach  | Comprehensive (GDPR + AI Act)      | Sectoral + state-level laws
Consent Requirements | Explicit and granular (GDPR)       | Varies by state (e.g., CCPA)
AI Transparency      | Mandatory (AI Act)                 | Voluntary (NIST Framework)
Enforcement          | Centralized (DPAs) + heavy fines   | FTC litigation + state-level penalties
Focus                | Fundamental rights and prevention  | Innovation + mitigating harm ex post

V. Challenges and Criticisms

  • EU: Overly restrictive rules may stifle innovation (e.g., AI Act’s “high-risk” classification).
  • U.S.: Fragmented laws create compliance complexity (e.g., CCPA vs. Virginia’s CDPA).
  • Global Tensions: Cross-border data transfers (e.g., EU-U.S. Data Privacy Framework) remain contentious.

Suggested Citations for Legal Texts:

  • EU GDPR: Regulation (EU) 2016/679.
  • EU AI Act: COM/2021/206 final.
  • U.S. AI Bill of Rights: White House Office of Science and Technology Policy (2022).
Dott.ssa Luana Fierro

The First Definitions of Privacy

With this article we begin to analyze the evolution of the definitions of privacy, examining the earliest conceptions of the concept and tracing their transformation up to the most current definitions. In particular, we explore the path taken by the concept of privacy protection in the technological and digital context. Through a thorough review of the sources, we will see how the understanding of privacy has evolved over time, highlighting the key contributions that have shaped the modern definition of the term.

Early Definitions of Privacy

The modern conception of the term privacy begins in the late nineteenth century with the concept of the “right to be let alone” proposed by Samuel Warren and Louis Brandeis in their celebrated 1890 article, “The Right to Privacy.” From this perspective, privacy was understood as the right to be left alone, free from unwanted interference by third parties.

The influential article by Louis Brandeis and Samuel Warren, “The Right to Privacy,” was published in the Harvard Law Review in 1890. It is considered a milestone in the development of privacy law in the United States and has had a lasting impact on legal doctrine and on the understanding of the right to privacy.

Brandeis and Warren wrote the article to examine the rapid advances of technology and the growing intrusion into people’s private lives. They argued that the law should recognize and protect the individual’s right to be left alone and to enjoy solitude without unjustified interference.

“The Right to Privacy” explored the following concepts in depth:

  • The concept of privacy as a right:

Brandeis and Warren articulated the idea that the right to privacy is a fundamental right inherent in the concept of liberty. They argued that individuals should be free to control their personal information and be protected from unjustified intrusions.

  • Protection against gossip and sensationalism:

The authors expressed concern about the emerging scandal press and its sensationalism, which were invading individuals’ private lives for public entertainment. They stressed the need for legal safeguards against the publication of private, often lurid, details about people’s lives without their consent.

  • Technology and the invasion of privacy:

The article discussed how technological advances, particularly in photography and journalism, were contributing to the erosion of privacy. The authors foresaw the potential harm arising from abuse and advocated legal measures to curb the misuse of technology to violate individuals’ privacy.

  • The right to be let alone:

A phrase from the article worth dwelling on is “the right to be let alone.” This statement captures the essence of their thesis: individuals have the right to be free from unwanted intrusions into their private affairs and personal space.

In the short term the article did not produce an immediate legal revolution, but it was fundamental because it laid the groundwork for the development of privacy law in the United States.

Over time, the principles outlined by Brandeis and Warren have influenced court decisions and legal doctrines concerning the right to privacy. Even today, the claims made in “The Right to Privacy” remain relevant in contemporary discussions of digital privacy, surveillance, and the challenges posed by advancing technologies.

Dott.ssa Luana Fierro

Notes

To read the article “The Right to Privacy,” open the following link: https://www.jstor.org/stable/1321160?seq=1

For more on privacy in the EU, see the following link: https://digital-strategy.ec.europa.eu/it

For more on European Union law as analyzed on this site, see https://www.webcomparativelaw.eu/law-of-the-european-union-2/