AI, Machine Learning & Big Data Laws and Regulations 2022

1. Trends

Although Italy was the first European country to be hit hard by the COVID-19 pandemic, its economy “only” shrank by 8.9% in 2020, less than most of its southern European neighbours.  In fact, economic catastrophe was averted by the strength of the Italian industrial fabric, with factories staying open during most of the strictest lockdown periods.

Indeed, in 2021, Italy registered one of the best GDP growth rates in the Euro Area.  This was due partly to the successful rollout of the COVID-19 vaccination programme and of the vaccination mandates, which allowed the country to keep its shops and businesses open, unlike other European countries that had no choice but to adopt strict lockdowns.  Italy’s good economic results were also due to the successful deployment of the first tranches of the European Recovery Fund.  In this context, it is noteworthy that Italy is planning on using part of such funds to roll out its Strategic Programme on Artificial Intelligence (“AI”), which was approved by the Italian Government on 24 November 2021.  The Strategic Programme is aimed at boosting AI research in Italy by promoting its general understanding and its appeal to younger generations, with the final goal of making Italy an important AI hub.  Of course, the AI that Italy is seeking to promote has all the characteristics that the EU has been clarifying over the past few years: it is human-centred, trustworthy and sustainable, and is to be deployed in all of the Country’s strategic sectors, such as industry and manufacturing, the education system, agri-food, hospitality, health and infrastructure.  AI is also considered a fundamental tenet of the modernisation of Italy’s public administration.

By adopting and rolling out the Strategic Programme, Italy is making a robust effort to catch up with some of its partners within the European Union, which have traditionally invested more in AI.  In fact, whilst over the past few years concern had been growing that Italy’s industrial core was not adjusting swiftly enough to the AI and robotics revolution, the COVID-19 crisis has truly been a litmus test for the Country’s industrial preparedness, and the outcome is surprisingly positive.

To fully appreciate where the development of AI solutions currently stands in Italy, it should be remembered that Italy’s entrepreneurial landscape is very different from that of its European neighbours.  In fact, most Italian businesses are small and medium-sized enterprises (“SMEs”), which successfully compete in the international arena thanks to their agility and technological capabilities.  Of course, the risk with SMEs is that they lack the capital needed to invest adequately in research and development.  Nevertheless, the latest data show that Italy is sixth worldwide for the number of installed industrial robots, and that patent registrations for AI-related inventions have decidedly picked up lately.  A survey has shown that Intelligent Data Processing, Natural Language Processing, Chatbots, Recommendation Systems and Intelligent Robotic Process Automation (“RPA”) account for the bulk of AI adoption in Italy.

The Strategic Programme builds on several previous efforts to boost AI.  In 2020, the Italian Government set up a group of experts tasked with setting out the AI strategy for Italy and ensuring that the positive adoption trend does not falter going forward.  The outcome of such an ambitious project was a report released in October 2020, which identifies the underlying principles upon which the Italian AI strategy should be built and the main areas on which government action or guidance should be focused, and makes several policy recommendations.  As for the industries where AI use should be boosted, the Italian AI Strategy Report (“IASR”) identifies manufacturing and the Internet of Things (“IoT”), finance, healthcare, transportation, food, energy and the defence sector.  The public sector should also play an important role in the implementation of the Italian AI strategy, on the one hand by making the great trove of data it collects available through the Open Data initiative, and on the other by increasingly using AI for its institutional tasks.

Whilst some of the recommendations appear immediately actionable, others may be interpreted as calling for excessive ex ante regulation, as we will see in the following sections.

Also, the urgency with which the IASR appears to be encouraging industrial SMEs to join forces and enter into Data Sharing Agreements to leverage their joint data resources does not seem to factor in the scale of data actually required for machine-learning techniques to yield meaningful results.

2. Ownership/protection 

Most recently, the discussions around the intellectual property implications of AI have centred on: (i) the opportunity to envisage new types of IP protection for AI algorithms; (ii) whether works created by AI could be granted IP protection; (iii) whether the training or deployment of AI may breach third-party IP rights; and (iv) whether AI inventions are eligible for patenting.

  1. Since no specific statutory protection is granted to algorithms, most commentators agree that AI should be protected by way of copyright.  However, since copyright protection can only be granted to the means by which an idea is expressed and not to the idea itself, algorithms can only be protected insofar as the software that embeds them can qualify for protection.  This may not seem an adequate level of safeguarding for algorithms, particularly in light of the fact that software programs can be decompiled to allow the study of their internal workings.  However, since the patentability of AI, as with any other software, would only be granted in the presence of technical character, copyright remains the most reliable form of protection.
    Of course, if we adopt a broader functional definition of AI, under which it is composed of both algorithms and the data-sets that are fed to it, then AI protection may also be granted under articles 98 and 99 of the Industrial Property Code (Codice della Proprietà Industriale), which protect know-how.  In fact, provided the data-sets are kept secret (hence, such protection would not be actionable in the case of data-sets originating from cooperative or open source arrangements), they could be regarded as know-how.  Certain commentators argue that not only data-sets but also algorithms themselves could be protected as know-how.  Finally, data-sets may also be regarded as non-creative databases and, as such, be granted ad hoc protection as sui generis IP rights under the Copyright Statute (Legge sul Diritto d’Autore).  In this respect, although Italian Courts have not yet ruled on this matter, it seems fair to argue that rapidly changing data-sets should be regarded as databases undergoing a process of constant amendment and integration, rather than as a continuous flow of ever-new databases.  In fact, the latter approach would not allow for database protection.
  2. Whether or not works created by AI could be granted IP protection is not, as one may think, a futurist concern, but a very current one.  In fact, whilst as of the date of writing few instances of AI-created artistic works requiring adequate protection have presented themselves, the question of whether data-sets originated by the workings of the IoT may qualify for IP protection has already been brought to our attention.  Although data-sets resulting from successive iterations within a series of IoT devices might, in theory, qualify for database protection, to date no statutes or case law have provided any clarity as to who should be regarded as the right holder(s).
  3. Also, algorithms may be regarded as being in breach of copyright if they are fed copyright-protected works during the training stage.  In fact, depending on the task that the algorithm is required to perform, learning data may include visual art, music, newspaper articles or novels which are covered by copyright.  However, as long as such training data are not used to replicate the protected works, their use during the learning stage appears to be permitted.
  4. As for whether AI inventions are eligible for patenting, the European Patent Office (“EPO”) DABUS decisions, which ruled that only inventions whose stated inventor is a natural person are eligible for a patent application, have – for the time being – discouraged any opinion to the contrary at national level.  On 21 December 2021, such decisions were confirmed by the EPO Legal Board of Appeal.

In a context in which case law has not yet had the opportunity to validate most commentators’ theories on the intellectual property implications of AI, Italian Administrative Courts have had a chance to rule on the relationship between algorithmic transparency and intellectual property.  Such opportunity arose in relation to a case in which Italian state-school teachers disputed the procedure by which they had been assigned to their relevant schools.  In fact, since 2016, an algorithm has decided which schools teachers are assigned to, based on a number of set parameters – paramount among which is seniority.  It soon emerged that a number of teachers were dissatisfied with being assigned to schools in remote regions, which in turn forced them to endure long daily commutes or even to relocate altogether.  When some teachers blamed the new algorithm and requested details of its internal workings, the Ministry of Education asked the software vendor which supplied the algorithm to prepare a brief explanation of how the algorithm worked.  However, after examining the brief and finding it too generic, the teachers asked to be provided with the source code, and when the Ministry rejected the request, several teachers’ unions sued the Ministry before the Administrative Court (TAR Lazio).

The ruling of TAR Lazio (CISL, UIL, SNALS v MIUR, #3742 of 14 February 2017) shed light on some very relevant legal implications of the widespread use of AI algorithms in decision-making applications.  The Administrative Court ruled that an algorithm, if used to handle an administrative process which may have an impact on the rights or legitimate interests of individuals, is to be regarded as an administrative act in itself and, therefore, must be transparent and accessible to the interested parties.  The Court also ruled on what constitutes transparency.  The Ministry of Education’s attempts to appease the objecting teachers by presenting them with the software vendor’s brief were not regarded by the Court as sufficient.  According to the Court, only full access to the source code would allow interested parties to verify the validity of the algorithm’s internal processes, the absence of bugs and, in general, the adherence of the algorithm to the criteria upon which the relevant decisions should have been made.  (The Court seemed to conflate the algorithm with its source code; however, since the algorithm debated before TAR Lazio was not of a machine-learning nature, this did not affect the Court’s reasoning on the specific transparency issue at stake.)  As for the balance between IP protection and the teachers’ right to algorithmic transparency, the Ministry of Education did object to the teachers’ request for sight of the source code on the ground that disclosure would breach the vendor’s IP rights.  The Court, however, stated that it assumed the licensing agreement between the software vendor and the Ministry included adequate provisions to protect the vendor’s IP rights, and went on to say that, even if such provisions had not been stipulated, that would not prevent an interested party’s access to the source code, as such party could only reproduce, and not commercially exploit, it.

Interestingly, it has again been left to an Administrative Court to define AI, in the absence of a statutory definition.  On 25 November 2021, the Italian Supreme Administrative Court ruled that, whilst an algorithm is a “finite set of instructions, well defined and unambiguous, that can be mechanically performed to obtain a determined result”, AI exists when “an algorithm includes machine-learning mechanisms and creates a system which not only executes the software and criteria (as in a ‘traditional’ algorithm), but that constantly processes data inference criteria and takes efficient decisions based on such processing, according to an automatic learning mechanism”.  The definition is certainly not watertight from a technical or legal standpoint, but it is nonetheless noteworthy.
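
The distinction drawn by the Court can be rendered in code terms.  Below is a minimal illustrative sketch (ours, not the Court’s; all names and figures are hypothetical): the first function is a “traditional” algorithm whose criteria are fixed in advance by its author, whereas the second system infers its decision criteria from training data, so that its behaviour changes with the data it is fed.

    from sklearn.linear_model import LogisticRegression

    # A "traditional" algorithm in the Court's sense: a finite, unambiguous
    # set of instructions, mechanically executed to obtain a determined result.
    def assignment_score(seniority_years: int, region_match: bool) -> float:
        """Deterministic ranking rule: the criteria are fixed in advance."""
        return seniority_years + (5.0 if region_match else 0.0)

    # An "AI" system in the Court's sense: the decision criteria are not
    # written by the programmer but inferred from data via machine learning.
    X_train = [[20, 1], [3, 0], [15, 1], [1, 0]]  # hypothetical past cases
    y_train = [1, 0, 1, 0]                        # hypothetical past outcomes

    model = LogisticRegression().fit(X_train, y_train)  # criteria are learned
    print(assignment_score(10, True))   # always 15.0, by construction
    print(model.predict([[10, 1]]))     # depends entirely on the training data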

3. Antitrust/competition laws

Although the Italian Competition Authority (Autorità Garante della Concorrenza e del Mercato, “AGCM”) has not yet taken any definitive stance on the impact that AI may have on competition, it has signalled that the issue is under consideration.  The main concern appears to be that businesses which collect great amounts of data – such as search engines, social media and other platform businesses – may end up stifling competition by preventing competitors and new entrants from accessing such data.  The assumption behind this is that businesses are increasingly data-driven and may suffer detrimental financial consequences should they not be permitted to access the relevant data.  As a way to tackle this, it has been proposed that Big Data be regarded as an essential facility.  The application of the Essential Facility Doctrine (“EFD”) to Big Data would mean that dominant enterprises may be required to let competitors access the data-sets they have collected, in order to avoid being regarded as abusing their dominant position.  However, data can often be easily and cheaply collected by new entrants and are by nature non-exclusive, inasmuch as consumers can (and often do) disclose a similar set of data to different service providers as consideration for the services they benefit from.  It appears, therefore, that the EFD would only apply to Big Data to the extent that the data at hand are, by their own nature or by the way their collection must be performed, difficult to gather or exclusive.

Since it appears that the EFD can only find application in particular cases where data cannot be easily collected or, for other reasons, are a scarce resource, it has been proposed that the risk of the creation of “data-opolies” be tackled by way of specific public policies aimed at incentivising data-sharing.

The joint report of the Italian Data Protection Authority (Garante per la Protezione dei Dati Personali), the Italian Electronic Communications Authority (Autorità per le Garanzie nelle Comunicazioni) and the AGCM of 20 February 2020 appears to confirm such positions, while at the same time cautioning that too stringent a data protection regime would prevent data-sharing, thereby creating entry barriers and hampering competition.  The joint report, however, implies that the GDPR has so far shown sufficient flexibility, among other things by introducing the right to data portability, which facilitates data re-usage.

Of course, data-sharing policies will have to be structured in such a way as to incentivise the sharing of those data which are necessary to secure fair competition, while preventing the sharing of information aimed at unfair practices such as price fixing.  Unlawful information-sharing practices may also be implemented through the deployment of ad hoc AI tools, for example with a view to enforcing unlawful cartels: algorithms may be used to monitor competitors’ prices in real time and enforce cartel discipline.  In such cases, the Competition Authorities will have to assess whether swift price adjustments, or the adjustment of relevant commercial practices within a relevant market, are the result of the deployment of unilateral pricing algorithms (which is, per se, permitted) or a case of enforcement of cartel discipline, which must be swiftly sanctioned.
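
To illustrate the conduct that remains permitted, the following is a minimal sketch of a unilateral repricing rule (the figures, names and data source are hypothetical): the firm independently observes publicly available competitor prices and undercuts the cheapest one, subject to a cost floor.  Nothing in the rule presupposes coordination with competitors – which is precisely what the Competition Authorities would need to verify.

    # Hypothetical sketch of a *unilateral* pricing algorithm.
    COST_FLOOR = 9.50  # never price below unit cost

    def reprice(competitor_prices: list[float]) -> float:
        """Undercut the cheapest publicly observed competitor price by 1%,
        subject to a cost floor. No competitor coordination is involved."""
        cheapest = min(competitor_prices)
        return max(COST_FLOOR, round(cheapest * 0.99, 2))

    print(reprice([11.80, 12.40, 11.95]))  # -> 11.68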

The AGCM is also in charge of enforcing certain consumer rights.  In this context, the AGCM sanctioned Facebook for having misled potential service subscribers by stating on its website that Facebook was going to be “free forever”.  The AGCM found such statement misleading since, under its current business model, Facebook does monetise customers’ data, and potential subscribers should have been duly informed.  The decision appears to have resulted in a general obligation for digital platform businesses to disclose to potential customers and subscribers how their data are monetised.

Quite notably, the IASR appears to be trying to revive the “Data as Essential Facility” doctrine, but only with regard to data gathered by IoT and Industry 4.0 solution providers in compliance with the relevant solutions’ purchase or licensing agreements.  It appears, therefore, that the IASR is not advocating regarding consumer data as an essential facility.

4. Board of directors/governance 

Company Directors are under the obligation to perform their duties with diligence and appropriate technical skills.  Pursuant to article 2086 of the Civil Code, Company Directors must put in place an organisational, administrative and financial structure adequate to the relevant business’s size and characteristics, also with a view to providing timely warning of deteriorating financial conditions and detecting a possible upcoming insolvency.  Under article 2381 of the Civil Code, the Board of Directors – which may include both executive and non-executive Directors – must jointly assess the corporate organisation as set up by the executive Directors.  In this context, Company Directors are increasingly expected to make use of AI to ensure that such structure is adequate, both by acquiring sufficient familiarity with AI themselves and by ensuring that the Company’s Chief Information Officer, Chief Data Officer and Chief Technical Officer are regularly consulted or even appointed as Board members.  This is in order to ensure that adequate AI tools are employed as foundations of the corporate organisation.

Of course, if a Company deploys AI in its commercial offerings, the support of the CIO, CDO and CTO, together with that of other corporate functions such as the General Counsel and the DPO, will also be required to ensure that the BoD can verify not only the adequacy of the Company’s organisation, but also the general compliance of all its products and services.

In Italy, companies are liable for certain crimes committed by their top-level or, in certain circumstances, mid-level managers on behalf or in the interest of their employer.  In order to avoid liability, companies need to prove that they have adopted an ad hoc compliance programme and enforced compliance with it, including by appointing a supervisory body (Organismo di Vigilanza or “OdV”).  In particular, in order to be exempt from liability, businesses need to provide adequate evidence that they have put in place a set of appropriate internal procedures, and that the relevant managers could only have committed the relevant crimes by eluding such procedures.

Initially, the crimes for which employers might be liable were bribery-related, but over time other crimes have been added, such as network and digital-device hacking, manslaughter, etc.  The required internal procedures typically span a number of business functions such as finance, procurement, HR, etc.  As many such procedures are increasingly AI-based (e.g., in recruitment processes initial CV screening is often carried out by way of an AI tool, potential suppliers’ track records are assessed algorithmically, etc.), the OdV will need to include individuals with adequate expertise to assess whether the deployed AI conforms to the applicable legislation and, if not, to act swiftly to remedy the situation.

Recently, some legal commentators have argued that, since Company Directors are under the obligation to make their decisions based on adequate information, such obligation may include an implicit duty to rely on AI-based decision-support tools.  For example, when the Board of Directors is convened to decide whether the company should enter into a certain long-term contractual commitment with a third party, that third party’s credit score becomes of paramount importance, and the Directors may be liable vis-à-vis shareholders and creditors if it were proved that their decision was based on a credit score determined using weaker methods than state-of-the-art AI.

5. Regulations/government intervention

No specific legislation has been adopted as regards AI.  The consensus seems to be that the current statutes are sufficient to tackle the challenges that AI is bringing to businesses and households.

This approach appears sensible, as an adjustable judicial interpretation of the current statutes should be preferred to the introduction of ad hoc sector-specific regulation, which may prove too rigid to apply to the ever-changing characteristics of AI.

For example, it is generally considered that liability for damage caused by AI-enhanced medical devices should fall within the field of application of the standard product liability regime; that algorithms monitoring personnel in the workplace (e.g. in fulfilment centres, supply chains, etc.) should comply with the specific legislation on staff monitoring (article 4 of law 300 of 1970) and with the employer’s general obligation to safeguard staff’s physical and psychological health (article 2087 of the Civil Code); and so on.  Even when a lively debate erupted a few years back on the legal implications of autonomous vehicles, most commentators seemed to believe that current tort statutes would suffice to regulate such a new phenomenon.

Over the next few years, as AI becomes increasingly pervasive and disrupts industries and habits to an extent not easily conceivable at the time of writing, it will probably be necessary to adopt ad hoc legislation.  The IASR, however, appears to have already adopted a different approach, as it highlights the need for AI-specific legislation.  For example, among other things, the IASR appears to recommend that commercial agreements having AI solutions as their object should be required to include statutory standard contractual clauses.

Finally, it should be noted that in Italy employers can monitor their staff by way of the “tools” that the staff use to carry out their duties.  Employment Courts have recently clarified that, in the case of digital devices, each single app downloaded onto the device must be considered a stand-alone tool and can only be used by the employer for monitoring purposes if it is instrumental to the performance of work duties.

6. Civil liability

Although case law has not yet had the opportunity to rule on the liability regime of AI, in the literature the opinion that the deployment of AI tools should be regarded as a dangerous activity seems widely accepted.  Therefore, according to article 2050 of the Civil Code, businesses deploying AI solutions would be considered responsible for the possible damage that such solutions may cause, unless they prove that they have put in place all possible measures to prevent such damage.  However, some commentators have observed that businesses deploying AI solutions may not even be in a position to adopt damage-mitigating measures, as algorithm providers do not allow access to the algorithms’ internal workings.  It has therefore been opined that AI solution providers should be held liable for damage caused by algorithms.  On the other hand, others have stressed that regarding any AI deployment as a dangerous activity does not seem fair and would deter the widespread adoption of AI vis-à-vis other countries with less draconian liability regimes.  Such concern has been countered by the observation that, as the potential damage brought by widespread AI adoption has not been fully assessed yet, the EU Precautionary Principle should apply, which would open the floodgates to regarding AI as a dangerous activity and to the application of article 2050, at least for the time being.  The notion that AI should be regarded as a “dangerous activity” is also promoted by the IASR authors, who further suggest aligning the liability regime of AI developers and marketers with that of animal owners.  However, other commentators have been reluctant to extend the “animal intelligence” liability regime to AI.

The role of “AI Agents” in the context of IoT platforms has also been widely discussed.  For example, in which capacity do AI Agents operate when placing an order as a result of their sensors detecting that the quantity or level of certain goods has decreased below a certain point (see the sketch below)?  Such agents cannot be regarded as representatives, as a representative must be legally capable; some commentators have therefore argued that AI Agents could be subject to the same very limited legal representation regime that slaves were subject to in ancient Rome.  It is hard to assess whether such creative legal thinking will be backed up by the Courts; however, these attempts to come to terms with AI Agents must be read in the context of a wider debate as to whether or not the advent of AI warrants the adoption of ad hoc legislation.
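
A minimal sketch of the ordering scenario may help frame the question (all names, thresholds and quantities are hypothetical; a real deployment would call a supplier’s ordering API):

    # Hypothetical IoT "AI Agent": it places an order once a sensor reading
    # falls below a reorder threshold. The legal question is in which
    # capacity the agent acts when the order is placed.
    REORDER_THRESHOLD = 10.0  # illustrative stock level

    def place_order(item: str, quantity: int) -> None:
        # Stand-in for a call to a supplier's ordering system.
        print(f"Ordering {quantity} units of {item}")

    def on_sensor_reading(item: str, stock_level: float) -> None:
        if stock_level < REORDER_THRESHOLD:
            place_order(item, quantity=50)  # who is legally bound by this order?

    on_sensor_reading("detergent", stock_level=7.5)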

Whereas some observers claim that the disruption brought by AI calls for the adoption of ad hoc regulation, others point out that such ad hoc measures would necessarily be too specific and would risk being behind the AI-development curve by the time they become effective.  The latter observers opine that the broad-based Civil Code provisions on tort and contractual liability would better adjust to the ever-changing AI technical landscape and use cases.

7. Criminal issues

Predictive policing and crime prevention

Over the last few years, Italy has consistently been adopting AI solutions for crime-prevention purposes.  Crime-prevention algorithms have been licensed to law enforcement agencies in a number of medium-to-large cities, including Milan, Trento and Prato.  Such AI deployment has been a complex exercise since, in Italy, four different police forces (i.e. Polizia di Stato, Carabinieri, Guardia di Finanza and Polizia Locale) carry out sometimes overlapping tasks and share only certain databases.

Integrating data coming from such a variety of sources may prejudice data quality, leading to unacceptably biased outcomes.  Moreover, data collection at a local level may be patchy or unreliable if carried out with inadequate methods or tools.  Typically, local law enforcement agencies rely on ad hoc budgets set by cities, municipalities or local police districts; therefore, poorer areas affected by severe budget constraints may have to rely on outdated Big Data systems or algorithms, giving rise to unreliable data-sets which, if integrated at a higher state level, may corrupt the entire prediction algorithm.  Biased data-sets may also derive from historical data which are tainted by long-standing police discriminatory behaviours towards racial or religious minorities.

Wouldn’t it be great if the police could know in advance who might commit a crime, or fall victim to one?  While many believe this is already possible thanks to the latest predictive policing AI tools, critics fear that such tools might be riddled with old-fashioned racial bias and a lack of transparency.

Predictive policing may, then, cause resentment in communities of colour or communities mostly inhabited by religious or cultural minorities.  Such resentment may grow to perilously high levels unless the logic embedded in the relevant algorithms is understood by citizens.  However, transparency may not be possible, either due to the proprietary nature of the algorithms (which are typically developed by for-profit organisations) or because machine-learning algorithms allow for limited explainability.  It has therefore been suggested that accountability may replace transparency as a means to appease concerned communities.  So far, Italian law enforcement agencies have been cautious in releasing any data or information regarding their crime-prevention algorithms.

Predictive justice

In Italy, as in other Countries, AI-based or AI-enhanced proceedings have sometimes been considered a possible step towards more unbiased criminal justice.  However, at the time of writing there are still (too) many issues preventing the swift entry of algorithms into criminal justice, the main obstacle being everyone’s right to be sentenced by way of a reasoned legal decision – a right which would be breached by the black-box nature of most AI algorithms.  In fact, the internal workings of algorithms may not only be kept obscure by algorithm vendors to protect their intellectual property, but in some cases might have evolved autonomously through machine-learning techniques, to an extent that not even the algorithm’s creator can account for them.

8. Discrimination and bias

In addition to what has been pointed out in relation to the use of AI for crime prevention, controversies have arisen as to the possible discriminatory consequences of the use of AI for human resources purposes.  In particular, the potential use of AI as a recruitment tool has led some commentators to argue that biased data-sets could lead to women or minorities being discriminated against.
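
How a biased data-set translates into biased outcomes can be shown with a minimal sketch (the features, figures and model are purely hypothetical and deliberately simplistic): a model trained on historical hiring decisions that favoured one group will score two otherwise identical candidates differently.

    # Hypothetical sketch: bias in historical hiring data propagates into
    # a recruiting model's scores.
    from sklearn.linear_model import LogisticRegression

    # Each row: [years_of_experience, gender] (0 = female, 1 = male).
    # In this toy history, male candidates were hired more often at
    # equal experience levels, so the label correlates with gender.
    X = [[5, 1], [6, 1], [7, 1], [5, 0], [6, 0], [7, 0]]
    y = [1, 1, 1, 0, 0, 1]

    model = LogisticRegression().fit(X, y)

    # Two candidates with identical experience receive different scores:
    print(model.predict_proba([[6, 1]])[0][1])  # male candidate
    print(model.predict_proba([[6, 0]])[0][1])  # female candidate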

Italy has, of course, implemented the EU anti-discrimination directives, and the use of discriminatory criteria by AI-enhanced recruiting tools would trigger the liability of both the recruiter and the algorithm supplier.

Equally, should the recruiting algorithm be fed biased, incorrect or outdated data, candidates who did not get the job could be entitled to compensation if they can prove that such data were used for recruiting purposes.

It appears less likely that algorithms would be used to single out personnel to be laid off in the context of rounds of redundancies.  In fact, the criteria by which redundant staff are picked out are typically agreed upon with union representatives; in the absence of an agreement, certain statutory criteria automatically apply.

Conversely, algorithms could be used to carry out individual redundancies, for example within management.  In fact, managers’ (Dirigenti) employment can be terminated at will (although the applicable national collective agreements provide for certain guarantees), and algorithms could be used to pick out the managers whose characteristics match certain AI-determined negative patterns.  However, the granularity of the data-set required for this specific task makes the use of AI in individual redundancies still unlikely.

9. National security and military

The Italian military has traditionally been both a NATO pillar and instrumental to UN peace-keeping and peace-enforcing missions worldwide.

The Ministry of Defence has published a document detailing the latest AI-based solutions which have been adopted by, or are in the process of being assessed by, the Italian armed forces.

In parallel, Leonardo S.p.A., an Italian-headquartered, state-co-owned multinational defence contractor, has increased its focus on AI applications on a number of fronts.  To this end, Leonardo has installed the Davinci-1, a “supercomputer” ranked among the 100 most powerful worldwide, at its Genoa (Italy) site.  The Davinci-1 will allow Leonardo to consolidate and boost its leadership in fields such as autonomous intelligent systems, high-performance computing, electrification of aeronautical platforms and quantum technologies.

The increased military focus on AI solutions has started to prompt early debates among legal scholars who, for the time being, appear to be focused on AI-based and robotic human enhancement and its potential constitutional impact.
