Affiliated Research Papers
- TARA+: Controllability-aware Threat Analysis and Risk Assessment for L3 Automated Driving Systems
- Edge Computing to Support Message Prioritisation in Connected Vehicular Systems
- Analysis of cyber risk and associated concentration of research (ACR²) in the security of vehicular edge clouds
- A Comparative Study of Cyber Threats on Evolving Digital Identity Systems
- Human factors for vehicle platooning: A review
- A Comprehensive Survey of Threats in Platooning—A Cloud-Assisted Connected and Autonomous Vehicle Application
- Customers’ perception of cybersecurity risks in E-commerce websites
- Considerations for secure MOSIP deployment
- The impact of message encryption on teleoperation for space applications
- Securing Cloud-Assisted Connected and Autonomous Vehicles: An In-Depth Threat Analysis and Risk Assessment
- Behavioural analysis of COVID-19 vaccine hesitancy survey: A machine learning approach
- Challenges in Threat Modelling of New Space Systems: A Teleoperation Use-Case
Case Study 1: How Deep Fakes and Propaganda Are Reshaping Reality
Summary: This case study examines how deep fakes are used to spread disinformation and misinformation, distinguishes between the two, and proposes a novel approach to identifying deep fakes without stifling innovation. It highlights the dangers deep fakes pose in manipulating public opinion, damaging reputations, and eroding trust in media sources and information dissemination.
Team: The case study was conducted by a multidisciplinary team comprising experts in AI technology, cybersecurity, media studies, and ethics. The team collaborated to investigate the phenomenon of deep fakes and their societal implications.
Case Study Description:
Deep Fakes and Disinformation:
The study provides an overview of deep fakes, which are synthetic media generated using artificial intelligence techniques. It delves into the ways deep fakes can be employed to intentionally spread disinformation, including the malicious manipulation of audiovisual content to deceive and mislead viewers.
Deep Fakes and Misinformation:
The case study differentiates between disinformation and misinformation. Disinformation refers to the deliberate creation and dissemination of false information with the intention to deceive. Misinformation, on the other hand, involves the inadvertent sharing of false or misleading information without malicious intent. The study examines how deep fakes can contribute to both forms of false information dissemination.
Impact on Society and Democracy:
The case study explores the potential consequences of deep fakes on society and democratic processes. It highlights the challenges faced by individuals, organizations, and policymakers in discerning the authenticity of media content, as well as the erosion of trust in traditional sources of information. The study emphasizes the need for robust methods to identify deep fakes while preserving the integrity of authentic media.
Novel Approach to Identify Deep Fakes:
In response to the growing threat of deep fakes, the case study proposes a novel approach to identify and combat their proliferation. The approach involves a combination of AI-based detection algorithms, crowdsourced verification systems, and partnerships between technology companies, academia, and media organizations. The goal is to create an ecosystem that enables the identification of deep fakes without stifling innovation and creativity in the AI field.
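The case study describes this ecosystem at a conceptual level rather than as a concrete implementation. Purely as an illustration of the fusion idea, the minimal Python sketch below blends a hypothetical detector's score with crowdsourced verification votes; the function names, weights, and threshold are assumptions made for this example and do not come from the study.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    """Combined assessment for a single piece of media."""
    ai_score: float     # detector's estimated probability the media is synthetic (0-1)
    crowd_score: float  # fraction of human verifiers who flagged it (0-1)
    combined: float     # weighted blend of the two signals
    label: str          # human-readable outcome

def combine_signals(ai_score: float, crowd_votes: list[bool],
                    ai_weight: float = 0.6, threshold: float = 0.5) -> Verdict:
    """Blend a detector score with crowdsourced flags into one verdict.

    ai_weight and threshold are illustrative placeholders, not values
    taken from the case study.
    """
    crowd_score = sum(crowd_votes) / len(crowd_votes) if crowd_votes else 0.0
    combined = ai_weight * ai_score + (1 - ai_weight) * crowd_score
    label = "likely deep fake" if combined >= threshold else "likely authentic"
    return Verdict(ai_score, crowd_score, combined, label)

if __name__ == "__main__":
    # A clip the detector scores at 0.82, with 7 of 10 crowd verifiers
    # flagging it as manipulated: combined = 0.6*0.82 + 0.4*0.7 = 0.772.
    print(combine_signals(ai_score=0.82, crowd_votes=[True] * 7 + [False] * 3))
```

A weighted average is the simplest possible fusion rule; a production system would need calibrated model scores, verifier reputation weighting, and safeguards against coordinated voting, none of which are specified in the study.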
Ethical Considerations:
The case study addresses the ethical implications of detecting and mitigating deep fakes. It emphasizes the importance of striking a balance between protecting against malicious use of AI technology and preserving privacy, freedom of expression, and the potential benefits of AI innovation.
Case Study 2: Navigating the AI Revolution
Summary:
- AI regulation is coming. Governments around the globe have been actively discussing how to control this new technology, and the European Parliament recently approved the world’s first AI rules.
- If history is any indication, the EU’s AI rules could become the standard for every country and region.
- Reviewing these regulations is therefore essential to understanding what’s coming, so today I’m going to give you a brief history of the EU’s AI rules, tell you what they say, explain what they could mean for you, and assess whether AI regulation is even possible.
- EU AI Regulations: The EU’s AI rules have their roots in the bloc’s Digital Decade initiative, the second phase of which began in 2020, around the time the pandemic got underway.
- Like all the other initiatives that nobody voted for, the EU’s Digital Decade initiative seeks to completely transform the continent by the year 2030.
- The EU’s AI Act seems to be another example of surprisingly reasonable regulation; the caveat is that, as an EU regulation, it will override any AI rules that European countries already have in place, without the input or approval of their citizens.
- The text of the AI Act was published by the European Commission in April 2021, long before OpenAI introduced ChatGPT.
- For context, only the European Commission can table legislative proposals in the EU; these proposals are then debated and amended by the European Parliament before they are passed.
- OpenAI has reportedly been lobbying EU politicians to ensure that the AI rules don’t overburden the company, and the final text approved on the 14th of June subjects OpenAI’s technologies to less scrutiny.
- Governments will need to balance allowing AI to evolve with preventing it from becoming something they don’t want.