Where Are We? Using Scopus to Map the Literature at the Intersection Between Artificial Intelligence and Crime

Research on Artificial Intelligence (AI) applications has spread over many scientific disciplines. Scientists have tested the power of intelligent algorithms developed to predict (or learn from) natural, physical and social phenomena. This also applies to crime-related research problems. Nonetheless, studies that map the current state of the art at the intersection between AI and crime are lacking. What are the current research trends in terms of topics in this area? What is the structure of scientific collaboration when considering works investigating criminal issues using machine learning, deep learning and AI in general? What are the most active countries in this specific scientific sphere? Using data retrieved from the Scopus database, this work quantitatively analyzes published works at the intersection between AI and crime, employing network science to answer these questions. Results show that researchers are mainly focusing on cyber-related criminal topics and that relevant themes such as algorithmic discrimination, fairness, and ethics are considerably overlooked. Furthermore, the data highlight the extremely disconnected structure of co-authorship networks. Such disconnectedness may represent a substantial obstacle to a more solid community of scientists interested in these topics. Additionally, the graph of scientific collaboration indicates that countries that are more prone to engage in international partnerships are generally less central in the network. This means that scholars working in highly productive countries (e.g. the United States, China) tend to collaborate with researchers based in the same country. Finally, current issues and future developments within this scientific area are also discussed.


Introduction
The last two decades have witnessed a growing interest of scholars coming from the natural, physical and mathematical sciences in social science problems. Mathematical and statistical modeling have spread across multiple disciplines that focus on the study of human beings and human societies and that have traditionally been marked by qualitative research. Besides the economic sciences, which inherently deal with numerical quantities and are, therefore, traditionally more receptive to quantitative approaches, mathematics and statistics have infiltrated many other disciplines falling under the broad category of "social sciences", including sociology, political science, and criminology [1,2,3,4,5,6,7,8]. While the wall of resistance against quantitative research was finally collapsing, opening new perspectives and posing new challenges to scientific inquiry, other fields were experiencing another revolution, potentially one of the most intriguing and fascinating in human history. The interplay between neuroscience, computer science, mathematics, and other satellite fields had given rise to decisive progress in the formalization, development, and deployment of intelligent algorithms for solving different classes of problems [9,10]. Artificial Intelligence, through several approaches and hundreds of different algorithms, has since then increasingly become a central component of research on computers and computation and has acquired a critical role in several other fields. The capabilities of AI systems have also been tested in social science fields. Even in this case, such a process has begun to reach criminology. Nevertheless, studies that investigate the extent to which AI has intersected research on crime do not exist.
In spite of the relevant debates that have emerged regarding two areas of application of AI systems, namely criminal justice and policing, the literature lacks a mapping of the extant works that integrate intelligent algorithms and investigations of criminals and criminal behaviors. In light of these considerations, this work proposes to map the extant literature through a two-fold analysis. First, it will provide a descriptive analysis of the trends of publications from 2010 on, considering top AI and data science conferences and criminology journals. Second, it will perform a Systematic Literature Search using Scopus, a database containing over 69 million abstract and citation records of peer-reviewed literature. The aim is to shed light on existing trends and patterns in this growing and heterogeneous area of research and to reason about likely future pathways and directions. The article is organized as follows. The "Background" section will briefly describe criminology and its evolving nature, the success of AI and some major recent applications in social science areas and, finally, the ethical and societal debates on the use of intelligent systems in criminal justice settings and policing. The "Analytic Strategy" section will describe the search strategy and the methodological setup of the study. The "Analysis and Results" section will present and comment on the outcomes of the two different analytical dimensions of the study, namely the analysis of current patterns in topics and themes of research related to AI and crime and the structure of individual- and country-level collaboration networks. Finally, in the last section, considerations derived from the empirical data will be drawn in an attempt to better picture this strand of research and to define its current issues and potential future pathways.

The Interdisciplinary and Evolving Nature of Criminology
Criminology is a discipline that has long benefited from the dialogue between different fields. Crimes and criminal behaviors have been studied from a manifold of perspectives during the last two centuries. These perspectives include medicine, psychology, biology, philosophy, law, sociology, economics, and political science. Given the polymorphous nature of crime and criminal phenomena, it is not a surprise that so many scientific fields have focused on questions such as "What is a crime?", "How should a person be punished after committing a crime?", "What makes a human being an offender?" or even "How can we reduce crime?". The waves of success of some disciplines over the course of history have been favored by the socio-political context that, in many cases, has influenced the scientific debate decisively. This, for instance, has been the case of neurocriminology, which has suffered from the legacy of the theories of Cesare Lombroso, the Italian physician who founded the Italian school of positivist criminology [11]. Lombroso argued that the inherited nature of a criminal could be identified by physical defects and traits. In spite of the different scientific trajectories of each of the subfields that broadly constitute the area of criminology, one process generally applies to most (if not all) of them: the increasing use of data to propose, test or support theories and, more broadly, the growing prevalence of quantitative research [12,13]. This process has certainly been favored by the technological and scientific progress made in the last fifty years in other scientific fields (e.g., the diffusion of personal computers), and has been facilitated by the interest of policy- and decision-makers in designing crime-control strategies and policies based on empirical evidence.
Regardless of the specific topic being investigated, quantitative and statistical methods based on numerical data have gained success and fostered the interest of scholars who do not belong to the original fields of criminology. Hence, the rapid availability of data and information for measuring, mapping, explaining, predicting and forecasting crime has brought mathematicians, statisticians, physicists, and computer scientists to criminology. Statistical and mathematical models have been applied to a variety of research topics and problems. They have been fundamental, among other things, in the development of the field of life-course criminology [14,15,16,17,18,19] and in the study of the spatio-temporal concentration of crime [20,21,22,23,24,25,26,27]. Furthermore, they have been instrumental in unfolding mechanisms in the study of terrorism [28,29,30,31,7,32] and organized crime [33,34,35], and have been applied to shed light on features and patterns of specific crimes, either violent [36,37,38] or not [39,40]. Common methods include regression analyses, finite mixture models, network analysis, Markovian models, structural equation modeling, multilevel analysis and causal models [5].

The Spread of AI and its Application to Societal Issues
While the quantitative evolution of criminology is irrefutable, researchers have not yet scanned the scientific production that employs artificial intelligence (AI) to investigate crime-related problems. In the last ten years, due to the combined effect of several events and phenomena related to the study of artificial intelligence and statistical learning, the success of algorithms designed to learn existing patterns in data without being explicitly programmed to do so has been enormous. Research on the concept and potential development of machines that can think and acquire human capabilities is not new. It dates back to philosophical questions posed more than two thousand years ago, and its present nature has been mainly shaped by the seminal works of Alan Turing and John von Neumann in the first half of the twentieth century, and of Claude Shannon, Marvin Minsky, John McCarthy, Ray Solomonoff, Oliver Selfridge, Trenchard More, Arthur Samuel, Allen Newell and Herbert Simon in the Fifties and Sixties [10]. Artificial intelligence relies on legacies of mathematical constructions and techniques that are centuries old. However, in the last fifteen years, especially due to the breakthrough in the use of artificial neural networks, artificial intelligence has gained unprecedented attention and success beyond the borders of academia. The AI landscape in terms of approaches and methods is extremely complex and continuously evolving. Nonetheless, expressions such as "machine learning" and "deep learning" have become popular even among non-specialists and non-academics. On the one hand, machine learning refers to the capability of an algorithm to acquire its own knowledge via the extraction of patterns and schemes from existing data [41]. This capability, which is usually described more broadly as an approach, generally distinguishes between supervised and unsupervised learning.
Supervised learning is the capability of an algorithm to learn correct input-output representations from training data. Under this category fall algorithms designed for classification or regression tasks. Unsupervised learning, instead, relates to the goal of an algorithm to correctly represent the structure or distribution of data in the absence of ground-truth outputs [42]. On the other hand, deep learning is an approach that makes it feasible for a computer to learn complex representations out of simpler concepts. These representations are learned by exploiting hierarchical, multi-layered, deep architectures, including, for instance, multi-layered long short-term memory networks (LSTMs), deep convolutional neural networks (DCNNs) and generative adversarial networks (GANs) [41]. Notwithstanding inevitable overstatements, influenced also by the hype around the concept of AI, such algorithms have obtained consistent and impressive results in many different areas. This wide success has led to a displacement of debates, applications, and experiments into areas other than computer science and mathematics. This displacement has indeed touched the social sciences or, more narrowly, specific societal problems. Methods based on machine and deep learning have been used to predict poverty using a variety of data sources, including satellite images [43,44]. The availability of rich and multi-modal data and the strengths of intelligent algorithms have also made it possible to study topics related to climate change and climate models [45,46,47]. Other applications have focused on social work settings, proposing strategies and models relying on AI to minimize violence among homeless youth or to ameliorate the living conditions of homeless people [48]. With different levels of sophistication and performance, studies have also addressed crime-related problems [49,50,51,52].
Finally, other relevant applications have focused on agricultural issues [53,54], health care [55,56,57], traffic prediction and transport optimization [58,59] and individual and collective behaviors on social media [60,61,62]. Furthermore, as AI systems are increasingly deployed in everyday lives, they have triggered conceptual, moral and philosophical debates on ethics, fairness, and accountability, reinforcing discussions on the use of these technological and scientific advances for ensuring social good.

The Debate on Algorithmic Decision Making in Criminal Justice and Policing
While the potential of AI for social good remains tremendous, there exist several realms in which the use of intelligent algorithms has raised different types of concerns in terms of ethics, respect of human rights and political impacts [63,64,65,66]. In parallel with the vivid debate on themes such as superintelligence and existential risk [67], research groups, policy-makers, and activists are pushing towards the definition of guidelines and the improvement of current practices to make AI safer and more ethical. Autonomous vehicles, face recognition tools, data privacy, and biometrics are some of the focal points of the current discussion on the pitfalls of unregulated or poorly regulated AI. Criminal justice and policing, two critical dimensions of research in criminology, are also part of this animated debate. It is worth noting that the success of AI as a concept in the public, industrial and scientific discourse has contributed to a certain degree of confusion, leading also to the (whether deliberate or not) incorrect use of expressions related to artificial intelligence, machine learning, and deep learning. This confusion has also touched the areas of criminal justice and predictive policing. Indeed, while these predictive models rely on data and statistics, it may not be the case that all of them actually employ intelligent algorithms to complete their tasks. Nonetheless, if not already, it is highly likely that all of them will sooner or later employ some form of statistical learning in their computations. This projection, coupled with scandals related to biased systems and corrupted data, has sparked discussions and concerns within the scientific community and civil society. Criminal Justice Risk Assessment (CJRA) tools based on data and statistical models have been assisting judges in the American system since the 1920s [49].
Nonetheless, in recent years, following the rapid developments in the design of intelligent algorithms, ML-based risk assessment tools have become widespread. These models are used to inform judges on a large number of decisions, including pre-trial release, charging by prosecutors, release on parole, and sentencing (including probation).
Predictive policing, for its part, is a much more recent technological development than CJRA. Predictive policing (PP) regards the use of mathematical and statistical models to predict the time and location of future crimes using historical data on reported or investigated offenses [68]. As with CJRA, predictive policing and related technologies are now deployed in the US and in many other countries (e.g., the United Kingdom, Italy, the Netherlands, China), with likely expectations of further propagation in the near future. Yet, the deployment of algorithmic systems based on big data and statistical learning techniques has been contested by many. Several issues arising from the intrinsic nature of these systems and their practical consequences have attracted the attention of the scientific community [69,70]. ML and DL software have been referred to by many as black boxes, meaning that, for a variety of reasons, it is often impossible to determine the mechanisms that lead a machine to make a decision. These problems increase in scale if we consider that both CJRA tools and PP software are deployed in a highly sensitive public sphere. The consequences of such decisions can potentially change the lives of individuals forever. This is the crucial difference between business-oriented (e.g., recommendation algorithms for e-commerce) and public-oriented AI systems. As an example, studies have shown how many of these systems are fed with corrupted and biased data, giving rise to feedback loops or discriminatory decision-making processes that tend to harm disadvantaged communities and ethnic minorities [71,72].
Consequently, organizations, institutions and research groups have recently started to intensify their efforts to design and build fair algorithms and to investigate how current systems are causing unintended harmful effects on portions of society.

Search Strategy and Methods
In order to gather the data to map the existing literature that applies artificial intelligence in the attempt to study crime (in a broad sense), I have performed a search on the Scopus database. Scopus contains over 69 million abstract and citation records of peer-reviewed literature in a wide variety of disciplines, and it allows users to run queries searching for terms or phrases in different fields. After multiple tests, the chosen query combined crime-related wildcard terms (CRIM*, CRIMINAL*, and related variants) with the expressions "machine learning", "deep learning" and "artificial intelligence". The query has been kept sufficiently broad to avoid the exclusion of relevant records from the search. The working assumption is that publications at the intersection between AI and crime, though specifically directed to particular types of criminal phenomena or methodological approaches, are highly likely to mention general terms such as "crime/s", "criminogenic", "criminal/s", "criminality", "criminalization", "criminology", "criminological" for the crime-related part, and at least one expression among "machine learning", "deep learning" and "artificial intelligence". Tests with longer and more complex queries, e.g., queries listing different types of crimes or different types of algorithmic approaches, provided fewer results than the general query, whose results, although sparse, show a great variety of approaches and crime-related problems. For this reason, this general query has been selected as the most appropriate for the aims of the work. Furthermore, I have tested it in two different fields, namely "TITLE-ABS-KEY" and "ABS". The "TITLE-ABS-KEY" field searches the desired word in titles, abstracts and keywords. Keywords themselves combine different subfields, namely "AUTHKEY", "INDEXTERMS", "TRADENAME" and "CHEMNAME". The "ABS" field, instead, searches the requested words or expressions in the abstract alone. A first test using the "TITLE-ABS-KEY" field retrieved a total of 5,161 records.
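Since the exact query string is not fully reported here, the following is a minimal, hypothetical sketch of how the two OR-sets described above could be combined into a single abstract-field query in Scopus advanced-search syntax (the term lists and the helper name are illustrative, not the verbatim query used in this study):

```python
# Hypothetical reconstruction of the abstract-only query, based on the
# two sets of terms described in the text. The wildcard (*) covers
# variants such as "criminology", "criminality" or "criminogenic".
crime_terms = ["crim*", "criminal*"]
ai_terms = ['"machine learning"', '"deep learning"', '"artificial intelligence"']

def build_abs_query(set_a, set_b):
    """Join each set with OR, then require both sets (AND) within ABS()."""
    a = " OR ".join(set_a)
    b = " OR ".join(set_b)
    return f"ABS(({a}) AND ({b}))"

print(build_abs_query(crime_terms, ai_terms))
# ABS((crim* OR criminal*) AND ("machine learning" OR "deep learning" OR "artificial intelligence"))
```

A record is thus retrieved only if its abstract contains at least one term from each of the two sets, which mirrors the working assumption stated above.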
Unfortunately, a random inspection of the obtained records showed that there was a considerable share of false-positive items. Analyzing such false positives (namely, items that do not deal at all with crime-related topics), I have found that they were driven by errors in the Index Keywords ("INDEXTERMS"). The Index Keywords are different from Author Keywords ("AUTHKEY") because they are not provided by the authors. They are, instead, manually added by a team of Scopus professional indexers based on several vocabularies, such as the Ei Thesaurus for engineering, technology and the physical sciences or MeSH for the life and health sciences. False positives included, for instance, articles focusing on computer vision techniques to avoid the corruption of images. The term corruption, in those specific cases, was wrongly interpreted as referring to the crime of corruption, and the indexed keyword "Crime" was therefore added to the list. Given the number of such false positives, I have performed the search scanning only the abstracts for the queried terms. The search finally retrieved 692 items. In terms of methods, besides descriptive statistics regarding the temporal distribution of the publications in the sample and the comparison with works covering AI topics and applications in general, two different analytical dimensions will be investigated. These dimensions respectively aim at (1) investigating patterns of themes and topics in terms of author and index keywords and (2) studying the structure of co-authorship and country-level collaboration of the considered works. Both aims will be pursued by applying network science as the methodological framework. Network science has proven to be an extremely promising scientific field. Derived from mathematical graph theory, it nowadays encompasses many areas including social networks, biological networks, transportation networks, and communication networks.
Among the many areas in which network science has shown its potential stands the so-called "science of science". Science of science is the quantitative study of how scientific agents (e.g., authors, universities) interact; it focuses on the pathways that lead to scientific discovery and aims to better understand what drives successful contributions [73]. Given the natural fit of networks for capturing relations between entities, network science has, therefore, become a mainstream approach to unfold characteristics and patterns across scientific domains. For instance, networks have been useful in studying co-authorship in management and organizational studies [74], the structure of regional innovation system research [75], scientific endorsement [76], trends in creativity research [77], and the characteristics of research communities and their evolution over time [78]. In light of the success gained by networks in studying how scientists behave and how science occurs, this paper will employ graphs to address the abovementioned aims.

Limitations
There are two layers of limitations to this approach. The first one is inherently related to the fact that Scopus is not the only available database of electronic records of peer-reviewed literature. While Scopus has been used extensively in the literature to survey or map a variety of specific scientific areas [79,80,81,82,83,84,85], Web of Science, for instance, represents a valid alternative. Given that the two databases index different journals, proceedings and book series, the search on Scopus may automatically filter out publications that are not indexed in that specific database. However, since Web of Science only allows searching abstracts together with indexed keywords (similarly to the "TITLE-ABS-KEY" functionality of Scopus), the database has been excluded to avoid results biased by false positives. The second limitation regards the decision to search the desired key expressions in abstracts alone. There is a certain probability that articles that focus on AI applications for crime-related problems do not mention at least one of the expressions included in the two sets of terms in their abstracts. In this case, having excluded the keywords (both author and indexed ones) from the search, these records would be excluded from the data gathering.
In summary, the reader shall keep in mind that the results presented in this work are not to be intended as universal, given that the search certainly does not retrieve the entire universe of publications at the intersection between AI and crime. Nonetheless, given that Scopus is one of the largest databases of scientific literature and that the query is sufficiently broad to avoid the exclusion of relevant sources, the results are solid enough for the purposes of the present study.

Data Overview
In total, 692 studies have been retrieved through the abovementioned query. The export options of Scopus allow one to obtain a variety of information on each study, ranging from the year in which it was published to the funding institution. Figure 1 shows a noticeable increase in the number of studies published every year, especially in the last five years. The trend in terms of citations is less clear, as variance is higher, but it overall shows an increasing behavior as well. It is interesting to compare the trend of yearly publications with the overall trend of works dealing with artificial intelligence (both at the theoretical and applied levels). For this reason, I have performed a search in Scopus excluding the first part of the query (i.e., excluding crime-related expressions) and considering the same time-frame (namely 1981-2020). The count of studies in Figure 2 shows that the trend has grown steeply in the last 15 years (monotonically in the last 10 years, with the only exception of 2020, which only includes early publications). However, the plot of percent variations, which compares the temporal trends of works at the intersection of AI and crime and overall AI publications, better captures the yearly differences between the two (Figure 3). On the other hand, publications at the intersection between AI and crime were extremely rare and sparsely distributed during the first 20 years. This is probably due to the fact that research in AI was still focused on theoretical, mathematical and purely algorithmic reasoning and development rather than on practical societal applications or, potentially, to the fact that the study of artificial intelligence was still confined to a restricted number of scientific and academic fields. To this, it should be added that the fluctuating fortunes of AI in those decades have certainly impacted its diffusion to other areas. After 2000, the number of works has started to increase appreciably.
The variations in this case are much more intense but generally positive, with the exception of 2007 (-77.77%). Notably, in the last three years (2020 excluded), the percent variations of works at the intersection between AI and crime were positive and higher than those for overall AI works (2017: +97.72% against +45.68%; 2018: +88.50% vs +66.98%; 2019: +16.46% vs +9.22%). These figures seem to testify to a growing interest in AI applications in the realm of crime-related research problems. When focusing on the types of documents obtained from the search (Figure 4), it is interesting to note that the majority of records are conference papers (373, against 266 journal articles). This might be due to two factors. First, publishing articles that propose new methodologies may be difficult in peer-reviewed journals, as noted also by [86] and [87]. Second, computer scientists tend to publish papers in conference outlets. Especially when compared to social scientists, this preference can drive the prevalence of conference papers in the present sample [88,89]. Retrieved records have been published across a total of 160 venues (conference series, book series, or journals). The venue with the highest number of records is Lecture Notes in Computer Science with 34 articles, followed by Advances in Intelligent Systems and Computing (20), ACM International Conference Proceeding Series (17), Ceur Workshop Proceedings (7) and Proceedings of SPIE - The International Society for Optical Engineering (7). The most represented journals include Procedia Computer Science (6), Computers & Security (5), and Interfaces. Two considerations emerge from these numbers. First, works on AI and crime are sparsely distributed across a heterogeneous and wide number of venues. This means that a proper homogeneous subfield of research has not developed yet, and that scientists have not yet found a proper dedicated venue.
This heterogeneity and lack of cohesiveness is also demonstrated by the fact that among the most frequent venues (although each accounts for 1.32% of the total venues) are a journal that is allegedly connected to a predatory publisher and that was indexed by Scopus in 2018, and a non-Western criminology journal founded in 2016. Second, and connected to this latter point, it is interesting to note that Western criminology journals are only marginally present in the list of venues (the only Western criminology and criminal justice journals reported are "Crime Science", the "Journal of Criminal Justice Education", and the "Journal of Quantitative Criminology"). This may suggest that specialized journals in these fields may not be ready to embrace sophisticated new methods derived from computer science and AI. Alternatively, it may be that authors working at the intersection between crime and AI come predominantly from fields other than criminology and criminal justice, and potentially mainly from computer science, thus making criminology journals less attractive for their careers and research aims.

Graphs of Author-and Index-Keywords: Patterns of Themes and Topics
Keywords are a useful variable to measure the evolution of scientific production. This also applies to the literature at the intersection between AI and crime. Figure 5 shows the temporal trends of keywords in the last twenty years. The plot highlights how, as the number of publications increases, so does the number of author and index keywords. The higher number of index keywords is driven by the fact that Scopus does not bound them to a fixed quantity, while authors usually have a maximum number of keywords to list in their publications. Overall, these figures suggest not only that the interest of researchers in AI applications for crime-related problems has grown considerably over the past two decades, but also that the number of topics, algorithms and problems being investigated is increasing over time. The yearly increase in the size of the literature on AI and crime is accompanied by a parallel growing heterogeneity of research problems. In order to understand what the most common topics investigated in this area are, keywords have been processed so as to create graphs of co-occurrence across publications. Using all the keywords (both index and author ones) included in the dataset, two separate matrices of co-occurrence were created and turned into two distinct graphs of the form G = (V, E, W), where V is the set of nodes (keywords), E is the set of edges mapping connections (co-occurrences across publications) among keywords, and W is the set of weights associated with each edge (namely, the number of times two keywords co-occur). Table 1 highlights the most important features of the two networks as a whole. What immediately emerges from the table is that the two graphs have markedly different characteristics. The Index-keyword graph has many more nodes (i.e., keywords) and edges, also in proportion to the Author-keyword graph, resulting in a higher density of the network as a whole.
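The construction of G = (V, E, W) described above can be sketched in a few lines of standard-library Python. The publication list below is hypothetical toy data; in the actual study each record's keyword set comes from the Scopus export.

```python
# Sketch of the keyword co-occurrence graph construction: every unordered
# pair of keywords appearing in the same publication increments the weight
# of the corresponding edge. Toy publications for illustration only.
from collections import Counter
from itertools import combinations

publications = [
    {"machine learning", "crime", "classification"},
    {"machine learning", "deep learning", "crime"},
    {"deep learning", "cybercrime"},
]

def cooccurrence_graph(pubs):
    """Return (V, W): the node set and a Counter of weighted edges."""
    weights = Counter()
    nodes = set()
    for kws in pubs:
        nodes |= kws
        for u, v in combinations(sorted(kws), 2):
            weights[(u, v)] += 1   # W: co-occurrence count for edge (u, v)
    return nodes, weights

V, W = cooccurrence_graph(publications)
print(len(V))                            # -> 5 distinct keywords
print(W[("crime", "machine learning")])  # -> 2 (co-occur in two papers)
```

Sorting each keyword set before pairing ensures that an edge is stored under a single canonical key regardless of the order in which its endpoints appear.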
In relation to this, the Author-keyword graph has a longer characteristic path length and diameter compared to the Index-keyword graph, suggesting that the former is much more sparse and disconnected. The disconnectedness of the graph is testified also by the value of network fragmentation, which maps the proportion of nodes that are disconnected in the whole set. As can be seen, the Author-keyword graph includes a considerable number of small components (1 isolate, 3 dyads, 19 triads), while the Index-keyword graph has only two components: a dyad, and the core one, which accounts for 99.999% of the total nodes. These differences between the graphs are due to the distinct nature of the keywords used to characterize each publication. Author keywords are much more discretionary, as the choice is left to the authors, while in the case of Index keywords the procedure is much more standardized and is carried out by professional indexers based on several available thesauri. On the one hand, in spite of the higher number of keywords (i.e., nodes in the graph), Index keywords are more densely connected and may be less useful in capturing existing patterns in publications. On the other hand, the standardized procedure employed by Scopus for categorizing studies reduces the issue of having the same words written differently (e.g., with capital letters, in British or American English). Figure 6 shows the kernel density estimation and the distribution of the centrality values in the binarized author-keyword and index-keyword graphs. Author keywords are much more clustered around values very close to zero, further highlighting the sparseness of topics. When index keywords are considered, the picture changes appreciably, in spite of a prominent left-skewness of the distribution. Index keywords, compared to author ones, are more densely connected. Tables 2 and 3 specifically consider the ten most central keywords overall.
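The component counts and the fragmentation measure discussed above can be illustrated with a standard-library sketch. The toy graph and the operationalization of fragmentation (here taken as the share of nodes outside the largest component) are illustrative assumptions, not the exact implementation used in the study.

```python
# Connected components via iterative traversal, plus a simple
# fragmentation measure, on a hypothetical toy graph: a core component
# of 4 nodes, one dyad, and one isolate (7 nodes in total).

def components(nodes, edges):
    """Return the list of connected components of an undirected graph."""
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, comps = set(), []
    for start in nodes:
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def fragmentation(nodes, edges):
    """Proportion of nodes lying outside the largest component."""
    largest = max(len(c) for c in components(nodes, edges))
    return 1 - largest / len(nodes)

nodes = list("abcdefg")
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("e", "f")]
print([sorted(c) for c in components(nodes, edges)])
print(round(fragmentation(nodes, edges), 3))  # 3 of 7 nodes outside the core
```

A fully connected graph yields a fragmentation of 0, while a graph of all isolates approaches 1, matching the interpretation of higher values as greater disconnectedness.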
Table 2 demonstrates the very high popularity of Machine Learning as a keyword used by authors in their work, with a centrality of 0.267 (meaning that 26.7% of the whole set of 1,719 keywords chosen by authors are associated with Machine Learning). The broader expression Artificial Intelligence is the second-most central word, followed by Deep Learning and Data Mining. Classification appears to be the most commonly performed task by scholars in the sample, as it is ranked fifth in the overall list, and Random Forest and Neural Networks are the two most popular classes of algorithms (and the only ones present in this specific list). The most central keywords in the Index graph partially overlap with those found in the Author ranking. Learning Systems is the most popular (50.2% of the keywords are associated with this particular keyword), followed by Crime. Artificial Intelligence, Machine Learning and Deep Learning, the three AI-related expressions used for the search query, are ranked third, fifth, and seventh respectively. Classification (of Information) is ranked tenth and further indicates the prevalence of classification tasks within the sample of works retrieved from Scopus.
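Centrality scores of this kind correspond to normalized degree centrality on the binarized graph (a node's share of all other nodes it is connected to); a minimal sketch on a toy graph, with invented keywords, follows.

```python
# Sketch: normalized degree centrality (degree / (n - 1)) on a binarized
# keyword graph, the interpretation used for the Table 2 scores.
import networkx as nx

G = nx.Graph()
G.add_edges_from([("machine learning", k)
                  for k in ("crime", "classification", "cybercrime")])
G.add_edge("crime", "cybercrime")

dc = nx.degree_centrality(G)  # networkx normalizes by (n - 1)
print(dc["machine learning"])  # co-occurs with 3 of 3 other keywords -> 1.0
```

On the real Author-keyword graph the same computation yields 0.267 for Machine Learning, i.e. co-occurrence with 26.7% of the other author keywords.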
While Tables 2 and 3 reported the most central keywords overall, Tables 4 and 5 specifically report the ten most central crime-related keywords, i.e. keywords that are connected in some way with criminal phenomena, criminology areas or criminological topics. Both tables reveal the prevalent interest of scientists in cyber-related topics. In Table 4, Cybercrime is ranked third, Malware is fifth, Security, a word generally connected with the cyber sphere, is sixth, and Cyber Security and Phishing are ranked seventh and eighth respectively. Finally, although Fraud Detection is not inherently cyber-related, many applications in fraud detection studies encompass digital or computer-related frauds. In spite of different specific keywords, the picture is substantially similar in the Index graph. Computer Crime is the second-most central keyword, followed by Network Security and Malware. Other popular cyber-related keywords are Security Systems and Intrusion Detection. These results provide a clear picture of the most trending topics in the area at the intersection between AI and crime. What I have broadly defined as cyber-related topics are extremely popular across both graphs, and their prevalence is even more evident considering the almost complete absence of keywords related to other areas of crime and criminology (with the exception of Criminal Law in the Author graph and Law Enforcement and Forensic Science in the Index graph). Two complementary explanations could help in decoding the central role of cyber-related keywords in both graphs. First, cyber-related topics, which are fairly new with respect to other criminal phenomena, have witnessed constantly growing interest from researchers; the inherently hybrid nature of crimes belonging to this sphere (both humans and machines are involved) has naturally favored trans-disciplinary research across domains such as criminology and computer science.
Second, data sets for cyber-related crimes are generally much wider and richer than data sets recording information on other crimes (e.g., robberies), potentially due to the intrinsic digital nature of crimes occurring in the cyber domain. This facilitates data availability for scientists. In spite of the vibrant debate around algorithmic decision-making processes in policing and criminal justice, this analysis suggests the peripheral role of keywords associated with these two areas in the sample. Similarly, keywords related to extremely critical and relevant topics such as transparency, bias, fairness and ethics are also peripheral in both graphs, suggesting that, so far, researchers have been mostly interested in applications rather than in the societal and ethical implications of research applying AI algorithms to crime-related issues. Transparency is ranked 71st in the Author graph and 398th in the Index one; Bias is ranked 918th in the Author graph and (Intrinsic) Bias 1481st in the Index one; Fairness is ranked 151st in the Author graph and 1370th in the Index one; finally, Ethics is ranked 152nd in the Author graph and (Codes of) Ethics 3674th in the Index one. The network-based analysis of keyword co-occurrence is relevant for understanding the most common areas upon which researchers are focusing, but it can also be interesting in highlighting likely future developments. In fact, as Figures 7a and 7b show, there is a clear relation between the centrality of a certain keyword and the total number of times works using that keyword have been cited. Furthermore, after calculating the prevalence of each keyword (namely, the share of papers in which a given keyword is used out of the total of works in the sample), data reveal an almost overlapping positive relation between citation count and prevalence as well.
This means, in general, that the more central a keyword is in the co-occurrence graph and the more common it is, the higher the number of citations that a work using that specific keyword will receive. This finding interestingly relates to the observation above regarding specific themes or topics that are not yet particularly popular in works at the intersection between AI and crime, especially when compared to the whole universe of keywords employed either by authors or professional indexers.
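The prevalence measure and its association with citations can be sketched as follows; the papers, keywords and citation counts are entirely invented, and the Pearson correlation is only one of the possible association measures the analysis may rely on.

```python
# Sketch: keyword prevalence (share of papers using a keyword) and its
# correlation with aggregate citations. All data here are illustrative.
import numpy as np

papers = [
    {"keywords": {"machine learning", "crime"}, "citations": 40},
    {"keywords": {"machine learning", "cybercrime"}, "citations": 25},
    {"keywords": {"fairness"}, "citations": 3},
    {"keywords": {"machine learning", "fairness"}, "citations": 12},
]

keywords = sorted({k for p in papers for k in p["keywords"]})
prevalence = np.array([sum(k in p["keywords"] for p in papers) / len(papers)
                       for k in keywords])
citations = np.array([sum(p["citations"] for p in papers if k in p["keywords"])
                      for k in keywords])

r = np.corrcoef(prevalence, citations)[0, 1]  # Pearson correlation
print(round(r, 2))
```

Even in this toy sample the most prevalent keyword ("machine learning") accumulates the largest citation total, mirroring the positive relation reported in Figures 7a and 7b.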

Graphs of Collaboration: Authors and Countries
The degree to which authors are connected through publication co-authorship is another relevant way to map the state of a specific academic area. In this regard, analyzing the graph of co-authorship of scholars who have authored publications in the present sample can enhance our knowledge of scientific inquiry at the intersection between crime and AI. Figure 9 illustrates the co-authorship network with a component layout that highlights the different groups of researchers collaborating together. What immediately emerges is that the graph is particularly disconnected.
In Figure 9, one large component of 227 authors, accounting for 13.59% of the total number of scholars in the sample, stands out, while the remaining authors are grouped into smaller components.

Figure 9: Graph of Authorship and Scientific Collaboration (Component Layout). Link width is log-scaled by number of collaborations; node size is log-scaled by total degree centrality.

Table 6 quantitatively describes the structure of the graph. The sample includes a total of 1,964 authors. The density of the network is extremely low, with 59 authors who are isolates. The same pattern is found when focusing on the number of dyads and triads (components of two and three authors respectively). Overall, 134 dyads and 115 triads are present in the graph, meaning that more than 30% of researchers are connected either to a single other author or to two other authors. The largest component includes 227 different researchers. The very high degree of disconnectedness among researchers working in this area represents a relevant finding. The sparseness of individual collaboration and the tendency to work in siloed groups describe a situation in which the circulation of new ideas and the inclusiveness of research projects are not favored. Furthermore, this structure is likely connected to the presence of "transient" researchers who contribute to a particular research area only by publishing one or very few papers [90]. Whatever the causal relation (if any) between disconnectedness and the presence of transient researchers, the coupling of these two phenomena discourages the formation of new theories, the replication of research findings and the development of a homogeneous corpus of literature. Furthermore, and beyond the need for theoretical reasoning, collaborative research is generally found to be more impactful in terms of citations and attractiveness of funding [91,92,93].
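The component census just described (isolates, dyads, triads, largest component) can be reproduced on any co-authorship graph with a few lines of networkx; the toy graph below stands in for the real 1,964-author network.

```python
# Sketch: classifying connected components of a co-authorship graph into
# isolates, dyads, triads and the largest component. Toy data only.
from collections import Counter
import networkx as nx

G = nx.Graph([("a", "b"), ("b", "c"), ("c", "d"),  # 4-author component
              ("e", "f"),                           # a dyad
              ("g", "h"), ("h", "i")])              # a triad
G.add_node("j")                                     # an isolate

component_sizes = [len(c) for c in nx.connected_components(G)]
sizes = Counter(component_sizes)
largest = max(component_sizes)
print(sizes[1], sizes[2], sizes[3], largest)  # isolates, dyads, triads, largest
```

Applied to the real sample, the same counts yield the 59 isolates, 134 dyads, 115 triads and 227-author core reported in Table 6.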
Further information can be gathered through the analysis of the graph mapping collaborative relations between countries. Based on the country in which each affiliation (e.g., research lab, group) is based, I have drawn a network in which each link quantifies the number of papers published jointly by countries i and j. If, for example, researchers a and b, based in two different labs located in two countries i and j, have published 3 papers together, a weighted link (with weight equal to 3) is created. Figure 10 displays the graph of relations between countries. Overall, affiliations from 77 countries are present in the sample. A total of 17 countries appear as isolates, meaning that no collaboration with foreign affiliations is present in the data. In order to inspect the extent to which countries collaborate, I have also calculated the percentage of works with international collaborations out of the total of collaborations. Nodes in Figure 10 are sized by the share of domestic collaborations (1 - (share of international collaborations)). Figure 11 instead shows the distribution of the international share of collaborations across countries. What emerges is that, on average, there is a low level of international collaboration in terms of publications at the intersection between AI and crime. Besides isolates, which trivially only have domestic collaborations, 39 countries (50.64% of the total) have an international share equal to or lower than 0.25, meaning that collaborations are, in at least 3 cases out of 4, only between research groups and labs based in the same country. From the international standpoint, the most international countries are Kenya (0. To better assess the structure of the graph of international collaborations emerging from the data, I have also calculated the binary centrality, the weighted centrality and an indicator of the relative presence of a given country in the sample of 692 works at the intersection between AI and crime.
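The international share can be computed per country as the fraction of its papers involving at least one foreign affiliation; a sketch under that reading follows, with hypothetical paper-affiliation lists (two-letter country codes are placeholders).

```python
# Sketch: international share of collaborations per country, i.e. the
# fraction of a country's papers co-authored with a foreign affiliation.
# The paper-affiliation lists below are invented for illustration.
from collections import defaultdict

papers = [["US", "US"],        # domestic collaboration
          ["US"],              # single-affiliation paper
          ["US", "KE"],        # international collaboration
          ["KE", "GB"]]        # international collaboration

counts = defaultdict(lambda: [0, 0])  # country -> [international, total]
for affs in papers:
    countries = set(affs)
    for c in countries:
        counts[c][1] += 1
        if len(countries) > 1:        # at least one foreign affiliation
            counts[c][0] += 1

intl_share = {c: i / t for c, (i, t) in counts.items()}
print(intl_share)  # e.g. KE -> 1.0, US -> 0.33...
```

The domestic share used to size nodes in Figure 10 is simply 1 minus each of these values.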
The binary centrality is simply the normalized centrality in the range [0,1] calculated from the network of binary interactions (i.e., collaborations) between countries. This network is the binarized form of the original weighted one, in which if two countries i and j have a number of collaborations ≥ 1, then the corresponding entry in the matrix becomes 1, and 0 otherwise. The weighted centrality is, instead, the normalized centrality computed from the original weighted matrix of collaborations. Finally, the indicator of relative presence simply captures the extent to which a country is represented in the total sample: for a country i, the indicator is given by the ratio between the number of works in which at least one author has an affiliation in country i and the total number of studies, i.e. 692. Some relations emerge (Figure 12). First of all, the correlation between international share and centrality is very low in the binary case and even negative when the weighted matrix is considered. This means that, on average, the most central countries tend not to collaborate internationally in this area and that, conversely, collaborations between countries that are peripheral to the network are more common. This finding is also somehow confirmed by the negative relation (r = -0.15) between international share and relative presence and by the very high correlation (r = 0.97) between weighted centrality and relative presence. Countries that are very present in the sample tend to collaborate mostly with other departments or labs based in their same territory. This might be related to the unequal distribution of resources and awarded grants across the world, which pushes peripheral countries to connect together in order to overcome the structural inequalities of science.
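The three indicators can be sketched as below; the collaboration matrix and per-country paper counts are invented, and the max-normalization of the weighted centrality is an assumption, since the text only states that scores are normalized to [0, 1].

```python
# Sketch: binary centrality, weighted centrality and relative presence,
# following the definitions in the text. All numbers are hypothetical.
import numpy as np

countries = ["US", "CN", "KE", "GB"]
W = np.array([[0, 5, 1, 0],    # symmetric weighted collaboration matrix
              [5, 0, 0, 2],
              [1, 0, 0, 3],
              [0, 2, 3, 0]])

B = (W >= 1).astype(int)                           # binarized matrix
binary_c = B.sum(axis=1) / (len(countries) - 1)    # degree / (n - 1)
weighted_c = W.sum(axis=1) / W.sum(axis=1).max()   # assumed max-normalization

n_papers = 692                                     # total works in the sample
papers_with = np.array([300, 150, 5, 40])          # hypothetical counts
presence = papers_with / n_papers                  # relative presence
print(binary_c, weighted_c, presence, sep="\n")
```

Correlating these vectors with the international shares (e.g., via `np.corrcoef`) reproduces the kind of comparison reported in Figure 12.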
An additional hypothesis may concern the interest of central countries in keeping their knowledge within their borders, especially given the critical and blurred nature of the area intersecting crime and AI. Table 7 lists the average and standard deviation of the international share values divided per continent. America, which includes 8 countries, among them the United States and Canada, which are particularly central in the collaboration network, is the least internationally oriented continent, with an average international share of 0.14 and the lowest standard deviation (0.16) in the sample. Besides Oceania, which only records two countries and therefore does not allow sufficiently meaningful comparisons, Africa is the most internationally oriented continent (0.36). Data show how, in general, research groups, departments and labs based in Africa are seeking to engage in networks of collaboration across borders and how, instead, more central countries in terms of scientific production show a lower tendency to work with foreign entities. This finding interestingly relates to the debate regarding the necessity of favoring inclusion and diversity in broader AI research, considering that African countries are, on average, the least central in both binary and weighted centrality scores and show the lowest values of relative presence (0.0042, meaning that African countries are, on average, present with an affiliation in only 0.42% of the works in the sample). Conversely, American countries are on average the most present but, as reported above, show the lowest values in terms of international collaborations. When analyzing data by continent, patterns emerge indicating how peripheral countries struggle to engage with more central ones.
This is, along with the structural disconnections in the authorship network, another fundamental obstacle to the formation of a global community of scholars and institutions working at the intersection between AI and crime-related research problems.

Discussion and Conclusions

The exponential diffusion of AI applications in many scientific domains outside of the traditional areas in which intelligent algorithms are developed, such as computer science, engineering, and mathematics, has also influenced research on criminal behavior and crime-related topics. This process has been favored by several factors, including the increasing open availability of data on crimes and offenders, the interest in such topics among scholars from disciplines outside criminology and the social sciences, and the growing accessibility of AI algorithms via statistical software and programming languages.
In spite of these aspects, research lacks an assessment of published works at the intersection between AI and crime. The present work attempts to fill this gap by providing a quantitative analysis of the literature in this area. Data are gathered from Scopus, an electronic database containing over 69 million records, and are analyzed using network science. The performed search returned a total of 692 research items, temporally distributed from 1981 to 2020. The analysis is divided into two main dimensions. First, keyword co-occurrence graphs are investigated, using both author- and index-keywords, to highlight patterns of themes and topics in the literature. Data indicate that scientists publishing in this area are mostly interested in cyber-related criminal topics such as Cybercrime, Malware, Phishing, and Intrusion Detection.
Conversely, topics that have gained the attention of non-specialists, activists, and policy-makers after several scandals, such as algorithmic fairness, discrimination, bias, and transparency, are largely overlooked. Furthermore, the analysis indicates that the higher the centrality of a keyword, the higher the number of citations that a work using that keyword will receive. Second, co-authorship and country-level collaboration networks are considered to assess the structure of scientific collaboration at the individual and national levels. The graph of author collaboration reveals a highly disconnected structure: the 1,964 scientists who have authored at least one work in the sample are divided into many components, including 59 isolates, 134 dyads, and 115 triads. When countries are taken into account, considering the primary affiliation of authors (to exemplify: if a researcher A publishes a paper while affiliated with Harvard University, their country-affiliation will be processed as United States), further patterns emerge. The most central countries (countries with a higher number of international collaborations, without considering domestic ones) and the most prevalent ones (namely, countries that are more present than others when affiliations are considered) tend to be less internationally collaborative when controlling for feedback loops. This means that, on average, researchers from these countries (e.g., the United States, China, India) prefer to collaborate with scientists affiliated with institutions based in the same country. These two layers of findings can help shape broader discussions regarding the interplay between the current state of research at the intersection between AI and crime and its future developments.
Given that Scopus data show that works in this area are, in proportion, increasing faster in quantity than works covering AI problems overall, it is crucial to assess the likely pathways that this research area may take tomorrow. With regard to themes and topics, the large interest in cyber-related themes suggests, by contrast, the underdevelopment of applications in other relevant criminological or crime-related areas. Additionally, the analyses reveal how scientists are overlooking critical topics regarding the ethics and responsible use of intelligent algorithms in areas such as criminal justice and policing. Given that keyword centrality is tightly related to citations, and that citations can predict future research directions, resource allocations and even recruitment processes [94,95,96], it is necessary to increase in a timely fashion the number of works focusing on ethics and related matters, so as to enhance the scientific debate on the need for a responsible use of algorithms. Responsible use of algorithms encompasses several issues, such as the avoidance of machine bias and discrimination against minorities or disadvantaged strata of the population. Given that algorithmic decision-making is increasingly deployed in the real world, impacting the lives of millions of citizens worldwide, the attention to technical applications of AI systems in crime-related problems should be balanced with works that focus on the societal, political, legal and moral consequences of such intelligent systems.
As far as co-authorship and country-level collaboration patterns are concerned, additional considerations can be made. First of all, the highly disconnected structure of co-authorship may represent an obstacle to the development of a structured and solid community. Given the highly trans-disciplinary nature of the debate at the intersection between AI and crime, scientific collaboration is crucial to guarantee a debate that overcomes structural barriers and asymmetries. In fact, if researchers continue to publish within this component-based structure, it will become difficult to establish grounded debates and inclusive cooperation. Inclusive cooperation indeed represents an issue, given the current state of international collaboration. Due to several causes (e.g., the disparity of resources, the critical domain of application), the most central and productive countries tend to avoid international collaborations. Conversely, developing countries that are, in general, less productive are trying to engage in international partnerships to counterbalance the lower availability of funds, grants, and resources for conducting research in this area. This asymmetry reinforces exclusion in international research and prevents such peripheral countries from joining scientific production and debate. A Western-centric standpoint in discussing the applications and consequences of AI systems applied to investigate or reduce crime-related problems can reinvigorate structural differences between countries. Given that research in this area often refers to the possibility of deploying intelligent systems in real-world scenarios, it is fundamental to avoid the risk of such increasing future inequalities. Less disconnected and more transnationally oriented scientific collaboration, at both the individual and country levels, can help in addressing these aspects.