AI is not currently regulated under Turkish law, but when AI issues arise, existing legal provisions may be applied by analogy. In this regard, question 9 discusses the positive rules on liability that apply where the autonomous decisions and actions of AI result in damages. Before examining these rules in detail, however, the legal status of AI should first be evaluated.
Thus far, several opinions on the legal status of AI have been presented in the legal doctrine. The most common is the property view, which holds that AI should be treated as the property of real and legal persons. Another view, proposed following the publication of the European Parliament Legal Affairs Committee's advisory report on robotics, suggests that a special status of electronic personality should be created for AI. The final view presented in the legal doctrine is the legal personality view, whose supporters argue that there is a relationship between AI and the company that creates and/or manages it, as well as between AI and that company's board of directors.
Although several countries have been taking legal and regulatory steps in the AI space, this is a completely new concept for every country and Turkey is no exception. Both in Turkey and elsewhere in the world, these initiatives have been launched in response to technological developments.
In Turkey, the government attaches great importance to AI applications and aims to promote the production and use of AI technologies, as stated in government strategies and policies. Thus, while AI currently remains unregulated, the background law can be expected to evolve and future regulations may follow.
Under Turkish law, even where a party is not at fault, it may still be found liable for compensation under the regime of liability without fault. Under Articles 66 and following of the Turkish Code of Obligations (TCO), employers, animal keepers and building owners may be found liable for damages caused by an employee, an animal or defects in a building if they have not exercised all reasonable care to prevent the damage from arising.
The TCO also provides for liability on the grounds of equity in connection with the general duty to take reasonable care. Pursuant to Article 65 of the TCO, a court may, on the grounds of equity, order compensation for loss and damage caused partially or fully by a mentally incapable person. Under this provision, for a mentally incapable person to be held responsible for the damages that he or she has caused, the act must be wrongful and contrary to objective law, and must have been committed personally by the mentally incapable person.
It may thus be possible to apply this liability regime by analogy to hold parties that use AI liable for compensation due to damages caused by the AI.
As discussed in question 1.3, under the concept of liability without fault, a party that uses AI technologies may be held liable for the actions of the AI under the applicable provisions of the TCO.
No, there is no AI regime in Turkey.
Turkey is not a party to any such bilateral or multilateral instruments. However, AI is closely related to cybersecurity, and the Council of Europe Convention on Cybercrime, which Turkey transposed into domestic law as of 1 January 2015 through the Agreement on Cyber Crimes, will thus apply where relevant.
No specific authority is responsible for enforcing the applicable laws and regulations. However, given that the 2023 Industry and Technology Strategy provides for the establishment of an Artificial Intelligence Institute within the Scientific and Technological Research Council of Turkey (TUBITAK) (please see question 8.1), we expect that initiatives in this regard will be taken by the Ministry of Industry and Technology together with TUBITAK. The duties of the ministry include:
The Department of Big Data and Artificial Intelligence Applications under the Presidential Digital Transformation Office has also been tasked with developing strategies and facilitating coordination within the scope of the policies determined by the president to ensure the efficient use of big data and AI applications in the public sector.
There are no specific regulations governing AI in Turkey and no legislative preparations for such regulations have as yet been made. While the legal issues associated with AI are currently resolved through general rules under the TCO, strategic plans and programmes have focused on developing AI and setting certain standards, such as in relation to the management, protection and dissemination of data. Thus, while standards may be introduced on practice-oriented issues, no legislative initiatives are expected in the near future.
However, the Turkish legislature also monitors the work of the European Commission and may use this as a model for the preparation of legislation. Hence, the approaches of EU member states towards the regulation of AI may directly affect how legal disputes are interpreted in future. Moreover, the recent EU white paper entitled Artificial Intelligence: A European Approach to Excellence and Trust and report entitled Policy and Investment Recommendations for Trustworthy Artificial Intelligence may shape future policies regarding AI.
In addition, the Council of Europe's Ad hoc Committee on Artificial Intelligence (CAHAI) is currently preparing a convention that aims to regulate artificial intelligence on the basis of human rights, the rule of law and democracy. Various policy documents and impact analysis templates are also being prepared for the implementation of the convention. Turkey takes an active role in this work and is at the forefront of the countries involved in preparing the convention. As the Council of Europe aims to align its work with the European Union's efforts in the same area, its outputs are expected to facilitate Turkey's harmonisation with the EU. Turkey's potential regulations are therefore expected to be compatible with both the CAHAI framework and EU rules.
The AI applications that have become most embedded in Turkey are primarily in the education, defence, health, finance, retail, business and agriculture sectors. Turkey has no official AI strategy as yet. However, workshops on AI and AI strategies are taking place, such as the workshop held by the Turkish Asian Centre for Strategic Studies in February 2020. The workshop was initiated by that centre and other bodies under the Turkey-based, multi-programme BRAINS2 TURKEY initiative, and reported on the requirements for and the pathway towards an AI strategy.
Turkey currently benefits from AI applications in many ways. According to sectoral impact analysis, AI applications are commonly used, in order of impact, in the health, telecommunications, e-commerce, banking and business, media and retail sectors. One of the main reasons for this is that funding and research and development have tended to focus on these sectors. Thus, there are many companies, initiatives and start-ups working on AI applications in Turkey.
Finance, business and e-commerce implementations are some of the most common AI-based products and services in Turkey. According to the Turkey Artificial Intelligence Initiative's (TRAI) Turkey AI Initiative Enterprise Report, published in July 2020, initiatives are ongoing involving systems such as:
The TRAI also updated its Turkey Artificial Intelligence Initiative Map in July 2020 to reflect the presence of 134 new start-ups.
Examples of available products and services include the following:
AI companies are generally structured as limited liability start-ups that will serve their function in a given sector or business, or as technology companies that provide AI software.
AI technologies are expected to contribute $15.7 trillion to the global economy by 2030 and to grow national economies by 26%. One of the ways in which AI companies are financed is through government funding and incentives. The European Union, for its part, has announced that it will invest $24 billion in AI research by 2020. Some European countries have also established national initiatives; for example, the French government has announced that it will invest $1.85 billion in funding AI research and initiatives.
The private sector also plays an active role in the AI field. Turkish start-ups are funded through several methods, including:
For instance:
The state is involved to a considerable extent in the uptake and development of AI, through policy initiatives such as the following:
AI is being treated as an asset that should be regulated. Sectoral impact analysis of commonly used AI applications in the health, telecommunications, e-commerce, banking and business, media and retail sectors is currently being carried out. These sectors use AI in a range of processes, including:
However, as yet these efforts have not extended to the preparation of a regulatory framework for AI. Moreover, these efforts do not focus on particular sectors, but rather seek to determine the positive effects of AI technology and applications in general, and to expand the use cases thereof.
While there is no sectoral divide in Turkey in terms of AI, certain localisation requirements in the Turkish regulatory environment will be applicable and may affect AI technology. Although there is no framework regulation governing data residency principles, certain sector-specific data localisation rules and obligations will apply to AI to the extent that it falls within such sectors. Moreover, in the absence of a framework regulation setting out procedures and principles regarding data residency or sovereignty, the Personal Data Protection Law 6698 – which transposed EU Directive 95/46/EC and its ancillary regulations into national law (collectively, the ‘DPL') – constitutes the main regulatory framework governing the hosting and processing of data and its flow in and out of the country. While the DPL subjects the cross-border flow of data to a regime compatible with that of the European Union, the political challenges faced by the local authority have strengthened data sovereignty concerns, leading to the application of restrictive measures.
(a) Healthcare
The most specific piece of legislation governing the processing of health data is the Regulation on Personal Health Data, enacted on 6 July 2019. However, the scope of this regulation is limited to the operations of public and private bodies relating to processes and practices run by the Ministry of Health. Therefore, unless the relevant operation relates to the duties and authorities of the Ministry of Health, the main applicable legislative instrument is still the DPL.
With regard to the domestic and international transfer of health data, the Regulation on Personal Health Data refers to the procedures and rules set out under the DPL. However, the storage of health data abroad by public bodies is restricted (and in certain circumstances prohibited) by Article 1 of Circular Note 2019/12 on Information and Communication Security Measures, which states that: "Critical information and data such as population, health and communication registration information, and genetic and biometric data shall be stored domestically in a safe environment."
Certain registration and physical audit requirements may also be interpreted as limiting health-related AI applications within the country.
(b) Security and defence
As per the Law on the Security of the Defence Industry and the Regulation on the Security of the Defence Industry, certain information can be categorised as classified information. The sharing of classified information requires a personnel security certificate and a facility security certificate, which are issued only to those who work in the industry. This requirement may be interpreted as limiting the transfer of security data abroad, although it does not limit security-related AI applications as such.
(c) Professional services
The Judicial Reform Strategy of the Ministry of Justice identifies studies on the use of AI in judicial proceedings as one reform under consideration, in line with the standards and guidelines of the Council of Europe, as well as the principles of legal equality and security. If AI technology is implemented in judicial proceedings, this will transform the sector and may set a precedent for disruption in professional services.
(d) Public sector
As a public sector-specific regulation, Presidential Circular 2019/12 on Information and Communication Security Measures was published in Official Gazette 30823 dated 6 July 2019. The circular sets out 21 security measures to be adopted by public entities, one of which imposes a blanket ban on hosting data outside of Turkish territory. The circular also referred to a guide that would set out further details on the scope of these measures and clarify their implementation by the public authorities. The Presidential Guide on Information and Communication Security, duly published on 28 July 2020, applies to public IT officials, who are expected to conduct internal IT security checks, and to anyone in the IT industry that engages with public institutions. The guide relaxed the localisation requirement by introducing a new definition and limiting the requirement to critical information only. Although the definition of ‘critical information' is unclear and open to interpretation, the guide is seen as promising in terms of relaxing the restrictions and enabling certain services and operations of public authorities to be processed via AI solutions that are not hosted in Turkey.
(e) Other
Strict localisation requirements are also imposed in the Turkish financial sector. Since the economic crisis of 2001, the Banking Regulation and Supervision Agency has imposed data localisation requirements on most financial institutions, and primarily on banks. An information system localisation requirement was first included in the Abolished Regulation on Internal Systems of Banks and Evaluation Process for Efficiency of Internal Capital of 1 July 2012 and was preserved in the current Regulation on Information Systems of Banks and Electronic Banking Services. On-site audits were provided for in the earliest version of the Banking Law dated 19 October 2005, but the abolished Banking Law 4389 included no provisions in this regard. It is planned that payment service providers and e-money institutions will also be subject to localisation obligations. Accordingly, the Draft Regulation on Payment Services and Electronic Money Issues and Payment Service Providers and the Draft Communiqué on the Information Systems of Payment and Electronic Money Institutions and Data Sharing Services in the Payment Services of Payment Service Providers require that payment institutions and e-money institutions keep their primary and secondary systems within Turkey. The localisation requirements that dominate the financial sector should also be taken into consideration when developing AI applications in the sector.
The Personal Data Protection Law of Turkey (DPL) covers the processing of personal data by automated and non-automated means. AI companies and applications should comply with the general principles set out in the DPL, which provide as follows:
AI companies and applications should also comply with:
Specifically, they should be aware that data subjects have the right to object where the processing of personal data exclusively through automated means produces a result that directly affects them, which is likely to occur in relation to AI practice. Lastly, as Turkey has transposed Convention 108 for the Protection of Individuals with regard to Automatic Processing of Personal Data into domestic law, the data protection regime set out under this agreement will also apply.
Additionally, the Personal Data Protection Authority has signed a protocol with Istanbul Technical University's AI and Data Science Application and Research Centre to conduct research into data protection, privacy and data security in AI practice.
Turkey has no dedicated cybersecurity legislation, but legislative bodies and regulatory authorities are currently working towards the establishment of a cybersecurity environment and legislation in this regard is under development. Accordingly, the provisions on cybersecurity are not set out in a single legislative instrument, but can rather be found scattered through separate sector-specific regulations.
The recently enacted Presidential Circular 2019/12 on Information and Communication Security Measures sets out extensive obligations that mainly apply to public bodies with respect to cybersecurity. In addition, the Communiqué on Procedures and Principles on the Establishment, Duties and Practices of Cyber Incident Response Centres 2013 sets out the obligations of cyber incident response centres to report and notify all cybersecurity breaches to the competent regulatory authorities as soon as the cyber breach occurs or the cyber threat is discovered.
In addition, criminal offences in the field of informatics are set out in the following provisions of the Criminal Code:
In 2017, the minister of transport, maritime affairs and communications announced the completion of work on the draft Cybersecurity Law, which would serve as a binding framework regulation for cybersecurity. While the private sector has welcomed this initiative, the law has still not been enacted by the legislature.
In terms of the development and uptake of AI, one critical issue to address is the possible detrimental effect of pricing algorithms on competition, in particular by facilitating collusive practices. Moreover, as in tort law, determining antitrust liability arising from the use of algorithms is becoming harder, as technological advances weaken the link between the algorithm and the humans who use it. That said, most algorithms used today still operate under human instructions, so humans can be held responsible for anti-competitive behaviour.
The Turkish Competition Authority (TCA) has not yet published any studies on the effects of algorithms on competition or on the digital economy more generally, and no cases thus far have investigated algorithmic commercial behaviour. Nonetheless, TCA officials have shared their views on regulation and enforcement in this space with the press and the public. Because prices can be altered automatically by algorithms developed by undertakings with access to vast amounts of data that are not available in physical markets, TCA officials have expressed concern that small adjustments to algorithms could significantly change the competitive order of the market; they have also stated that the TCA can address these issues using its traditional enforcement methods. In this regard, while the TCA has not specifically focused on the AI systems used, it has initiated a comprehensive study of digital markets, which is expected to examine AI-related issues relating to competitive behaviour. The TCA is also tracking emerging trends in competition enforcement around the world, including in the work of the Organisation for Economic Co-operation and Development, EU bodies and the national authorities of member states; and is prepared to respond to any sudden policy or legislative shift in relation to the digital economy.
One of the main concerns about AI is that it will lead to a loss of jobs due to the deployment of AI applications in the workplace. To address this, Turkey is exploring ways to ensure that its workforce – in particular, younger workers – is trained to use AI technologies in the workplace without increasing unemployment. Accordingly, the 11th Development Plan (2019–2023) seeks to prepare roadmaps for the development of new technologies, including AI, and the training of the qualified human capital needed for these technologies. Rather than resisting the adoption of AI, Turkey thus aims to train its population to better prepare for its use in the employment context. To that end, the government has introduced its 1 Million Software Developers project, which aims to create employment opportunities by providing free certified online training in several areas, including AI, software/programming, cybersecurity, digital design, personal development, secure internet, telecommunications systems, regulation and orientation.
Meanwhile, the Turkish Labour Code requires employers to comply with the principle of equal treatment in relation to all employees and thus prohibits discrimination based on employees' language, race, sex, disability, political thought, philosophical beliefs, religion, sect or similar grounds. As certain AI-based tools used in recruitment and performance evaluation processes pose a risk to this principle, employers must ensure transparency in the use of these tools. Additionally, as explained in question 4.1, employees and candidates have the right to object to the use of such tools if the results are the sole criterion on which assessments are made, both in recruitment and in performance evaluation.
AI is designed to conduct calculations that are beyond the capabilities of humans at impressive speeds. It is difficult to explain how AI works, and sometimes even the owners and developers of AI technologies do not understand exactly how their systems operate. This is mainly because AI conducts its calculations in a black box that cannot be accessed by others; only the final output is presented to the public. This poses a threat to data integrity, as we can only trust that the AI is respecting the integrity of the data. If incorrect data is entered on a platform, this breaches the integrity of the data and will skew the results of the algorithm. Because AI develops by itself through machine learning, humans cannot intervene in the actions that precede the creation of an invention or command. If there is a breach of data integrity and the AI builds on that breach, the results could be unreliable. This could prove a fatal flaw in sectors such as healthcare, where data about a patient's wellbeing could be wrongly entered, resulting in false diagnosis and treatment. There is no built-in protection against data manipulation: AI can be tricked into reading data incorrectly, which could result in fatal accidents.
Turkey is in the process of drafting a best practice guide in collaboration with the Scientific and Technological Research Council of Turkey (TUBITAK). The aim is to establish a bridge between the private sector and public policies. The practice areas that are currently under discussion include counter-terrorism efforts, education and the creation of an AI ecosystem. The draft is in its very early stages and was discussed at an Artificial Intelligence and Turkey panel held in 2019. Data protection and privacy, and the training of expert personnel, are the issues at the top of the priority list in relation to the draft. Many AI projects in Turkey are funded by the private sector, underlining the importance of private interest in the forthcoming guidance document.
The transformation of the legal system due to the COVID-19 pandemic is also shaping Turkey's AI landscape, with digital trials now being conducted across the country. This is another area that will be addressed in the guidance document.
The European Commission has also published Ethics Guidelines for Trustworthy AI, which sets out best practice guidelines. These set out seven key principles of trustworthy AI, accompanied by an assessment list. The list is followed by best practice suggestions for each of the seven principles. The guidelines are not necessarily exhaustive, but rather offer a broad idea of best practices and thus cannot be taken as definitive guidance.
As the head of the Digital Transformation Office of the Presidency of Turkey, Ali Taha Koç, has pointed out, in order to eliminate the ethical and privacy-related concerns associated with AI and to mitigate the adverse effects on society, AI practices should:
Accordingly, AI practices should take ethical values into consideration – for example, the obligation to create a sustainable, production-based environment.
While AI practices may vary depending on the context, the use case and the target users, certain technical requirements should always be met in order to guarantee the consistency of the results. In order for AI practices to meet the characteristics outlined in question 8.2, they should ensure robustness and verifiability – that is:
Companies should prepare their own policies to safeguard accountability and transparency, and embody these principles in their production processes as general principles. As it is easy for developers or stakeholders to lose sight of the accountability associated with the development of an AI system, companies should ensure that their employees are regularly trained in these principles and in the general rules of personal data protection, privacy and competition. Companies should also work on a ‘readable' format that can explain the nature of the systems created to the average person, while also preserving their trade secrets.
As explained in question 1, there is no special regime that governs contractual obligations in relation to autonomous decisions and actions of AI. Thus, in terms of breach of contract, establishing causation in relation to AI products may not be possible if the defect cannot be traced back to human error. This could result in nobody at all being held liable. In addition, if machines gain the ability to negotiate contractual terms and conclude contracts independently, the current legal framework may be insufficient to regulate this.
One option for mitigating these risks and establishing causation is to address them explicitly in the contract.
As explained in question 1, there is no special responsibility regime that governs damages caused by the autonomous decisions and actions of AI; such issues must thus be resolved based on general principles of tort law. However, given the special nature of AI, not all of the associated problems can be properly addressed under the current framework, which results in various risks.
The first risk arises in proving fault and causation where damages have resulted from AI. In general, AI systems are not transparent or accountable, which makes it difficult to prove who is actually responsible for the behaviour that caused the damage, and thus both to claim damages and to defend against liability. Similarly, the lack of transparency and accountability may make it difficult to identify the responsible party, especially where multiple parties are involved in producing the AI-based product. Although joint liability may solve this problem to some extent, the allocation of the compensation paid among those parties may remain unresolved.
To eliminate these risks, the most viable option under Turkish law may be to introduce a new strict liability regime that would hold AI producers and developers strictly responsible for defects and damages that may arise in relation to their AI system, without considering fault or negligence.
While the impact of bias will depend on the area in which the AI is used, AI systems are highly controversial in relation to discrimination, as they are generally not sufficiently transparent to explain the rationale behind their decision-making in an understandable manner. However, given that AI applications are not commonly used in the Turkish public sector, concerns are generally limited to practices in the private sector. Accordingly, the most critical issues arise in relation to employment practices, where companies' efforts to automate recruitment may contravene the principle of equal treatment of employees (see question 6.1). If the dataset on which AI is trained is itself biased on the grounds of language, race, sex, disability, political thought, philosophical belief, religion, sect or similar grounds, due to repetitive mistakes and bias already ingrained in society, this will likely result in discrimination against candidates, even though neither the candidate nor the employer may be aware of it.
While individuals have a right to object to the use of such tools pursuant to the Personal Data Protection Law of Turkey, AI best practices and ethical standards will play a crucial role in balancing the drive towards automation with the mitigation of AI-related risks. Efforts to introduce standards and promote the use of tools that increase transparency and accountability should be accelerated and embodied in the National Artificial Intelligence Strategy.
Under the current legislative framework, activity in the AI space is encouraged and incentivised, rather than restricted, given the limited number of AI-focused innovations that have made it to market. The AI-related policies announced thus far are mentioned in the 11th Development Plan (2019-2023). Article 346 of the plan states that the development of industrial cloud services for priority sectors will be supported. In this regard, technology suppliers will be encouraged to build software and services that can be provided through an industrial cloud platform; and the use of this platform by businesses will be encouraged by funding for its development. Security is addressed in Article 473 of the plan, which states that policies in this regard will be determined by the National Strategy for the Development and Distribution of Technologies for Artificial Intelligence.
Several incentives are aimed at promoting innovation in the AI space. Accordingly, the Small and Medium Enterprises Development Organization of Turkey's 2019-2023 Strategic Plan aims to promote the dissemination of high technology by enhancing and accelerating the entrepreneurship of small and medium-sized enterprises.
The Ministry of Industry and Technology's 2023 Manufacturing and Technology Strategy recognises that traditional goods and services may soon be replaced by intelligent products and services, along with the spread of disruptive technologies such as IT. To expedite technological advancements in the fields of defence, aviation and space technology, efforts to promote private sector investment and productivity will continue. With regard to innovations such as autonomous vehicles, AI applications and smart weapons systems, strategies and roadmaps will be prepared. Partnerships with foreign companies will be established to facilitate rapid entry into emerging technology markets, such as AI and machine learning, robotics and the Internet of Things. If required, company or technology acquisition opportunities will be evaluated and adequate resources will be allocated.
Another incentive is the Ministry of Justice's Judicial Reform Strategy, which aims to conduct studies on the potential use of AI practices in the judiciary, in line with the standards and guidelines of the Council of Europe and the principle of legal security.
In the meantime, the Ministry of the Treasury and Finance's New Economy Policy 2020-2022 has introduced initiatives to explore the economic benefits of big data sources. It has been announced that a Big Data and Artificial Intelligence Institute will be established.
Under the Turkish employment regime, the obligations of employers and employees are either:
The labour regulations also have a semi-mandatory nature, whereby:
Accordingly:
To attract talent, companies may avail themselves of the benefits afforded by special zones, which are supported by the government. ‘Free zones' are designated areas in which special regulatory treatment applies to the operation of companies, in order to promote exports of goods and services. Free zones offer a more convenient and flexible business climate designed to increase production and exports for certain industrial and commercial activities. To support high-tech products and R&D-based, high-added-value production, specialised free zones have also been established; and in terms of support and incentives, the IT sector is prioritised. The incentives and support available for companies and employees include:
A facilitated and special work authorisation procedure is also available for foreign employees, so that they are not subject to the general rules.
The 11th Development Plan (2019-2023) provides for the conduct of studies at a national scale to promote the production of domestic technologies in the AI field and the dissemination of such technologies throughout the Turkish economy. In this regard, the Digital Transformation Office and the Ministry of Industry and Technology have held a Workshop on National Artificial Intelligence Strategy, focused on transparency, privacy and data security, as well as AI opportunities, the effective use of resources and the transformation of the workforce. While we expect the strategy to be finalised in the next 12 months, the establishment of the Artificial Intelligence Institute – which will implement AI projects and produce information to be considered when setting policies and standards on issues such as the management, protection and dissemination of data – is one initiative envisaged in the strategy.
While no specific legislation on AI is expected, it is anticipated that a framework and criteria for data access and sharing and increasing the efficiency of the use of AI will be introduced. Cybersecurity legislation is also being prepared which may affect the adoption of AI technologies. The legislation is expected to introduce certain data localisation requirements and criteria, so AI models hosted outside Turkey may be restricted due to cybersecurity concerns and the government's efforts to keep Turkish data in Turkey. Lastly, as supporting AI technologies is high on the government's agenda, further incentives and prioritisation may be expected for companies that invest in this area.
Moreover, certain remarks made by Turkish government officials indicate that Turkey will prioritise the security of its national data and its AI algorithms. In this regard, it has also been stated that Turkey's AI strategy must be fair, transparent, reliable, accountable, value-based and dependent on national and ethical values, while also enhancing social welfare.
The Turkish government is insistent that Turkish data must remain in Turkey. International data transfers are almost impossible to conduct, which could hinder the functions of any AI tools that involve the Internet of Things, a cloud computing system or simple data transactions. Therefore, companies must consider the Personal Data Protection Law thoroughly before embarking on any activity that requires cross-border data transfers.
Turkey further does not recognise AI as the inventor for the purposes of a patent application. This approach is in line with that followed in the United Kingdom and the European Union, where the inventor must be a natural person. The same principle applies with regard to copyright. This is something for AI companies to consider when entering the market, as they will need to shape their patent and copyright applications accordingly.
As there is no specific legislation governing AI in Turkey, companies must associate their AI programs with established legal entities before making any applications. Given that as yet there is no legislation in this regard, companies could make use of this opportunity and present highly persuasive arguments regarding their products to the courts.
As explained in question 11.2, many types of support and incentives are available for research and development (R&D) activities conducted in Turkey. In addition to tax exemptions and support for certain types of expenditure, there are programmes aimed at supporting the R&D activities of small and medium-sized enterprises (SMEs) established in Turkey. In this regard, the Small and Medium Industry Development Organization facilitates the commercialisation of R&D products through the following support programmes: