CIF Seminars @ KU Leuven

A monthly series of interdisciplinary seminars on legal and technical topics in cybersecurity and online privacy, organised by the CiTiP, COSIC, and DistriNet research groups of KU Leuven. Subscribe to our mailing list or follow #CIFSeminarsKUL on Twitter for talk announcements. A playlist with all recorded sessions is available on YouTube; presentation slides are available below. All dates and times are in the Brussels, Belgium time zone (CET/CEST).


Upcoming Seminars


Next Seminar

Facial recognition in the wild: All Eyes on Clearview AI (by Dr. Catherine Jasserand)

When Thursday, 25 November 2021, 12:30 - 13:30
Location online only
Conferencing https://bbb.tbm.tudelft.nl/b/jan-ajw-cov-fzj
Abstract In January 2020, the New York Times revealed that a facial recognition software company, Clearview AI, had scraped billions of face images from social media and set up a platform offering services to law enforcement authorities. The revelations triggered a chain of reactions: the company rapidly faced legal challenges in the US, and several data protection authorities started investigations worldwide.
The talk will explain the platform’s functioning before digging into the data protection issues raised by Clearview AI’s practices. It will discuss the interpretation of the extraterritorial scope of the GDPR to apply the GDPR rules to a company neither established nor represented in the EU. It will also explain that law enforcement authorities in the EU lack legal grounds to use the platform (based on the national rules implementing the ‘Law Enforcement Directive’). Finally, it will wrap up with an overview of the legal complaints against the company (in Europe and the US) and against the police authorities using its services (such as in Sweden).

Previous Seminars, Slides & Recordings

Sex, Work, and Technology: Lessons for Internet Governance & Digital Safety (by Dr. Elissa M. Redmiles)

When Thursday, 21 October 2021, 12:30 - 13:30
Abstract Sex workers sit at the intersection of multiple marginalized identities and make up a sizable workforce: the UN estimates that at least 42 million sex workers are conducting business across the globe. Sex workers face a unique and significant set of digital, social, political, legal, and safety risks; yet their digital experiences have received little study in the CS and HCI literature. In this talk we will review findings from a two-year study examining how sex workers who work in countries where sex work is legal (Germany, Switzerland, the UK) use technology to conduct business and how they have developed digital strategies for staying safe online and offline. We will then describe how these findings can inform broader conversations around internet governance, digital discrimination, and safety protections for other marginalized and vulnerable users whose experiences bisect the digital and physical.
Recording Sex, Work, and Technology: Lessons for Internet Governance & Digital Safety (by Dr. Elissa M. Redmiles) Discussion

“Hypocrite Commits”: what are the legal and ethical limits to cybersecurity research? (by Ivo Emanuilov)

When Thursday, 07 October 2021, 12:30 - 13:30
Abstract In April 2021, the maintainer of the Linux stable kernel branch, Greg Kroah-Hartman, asked the kernel community to cease accepting patches from the University of Minnesota (UMN) and launched a review of all previously accepted UMN submissions. The events that led to this strong community reaction began with a paper submitted by a research team at UMN for publication under the title “Hypocrite Commits”. The paper was the result of a research project, initiated in 2020, concerning the intentional submission of patches that introduce flaws into the kernel. After a reportedly silent period of about seven months, the research team behind this project began to submit patches of notoriously poor quality that were quickly caught by the community. Believing this to be an attempted deception, the community halted acceptance of further UMN contributions to the kernel. On 5 May 2021, the Linux Foundation’s Technical Advisory Board published a “Report on University of Minnesota Breach-of-Trust Incident”, disclosing the details of the affair and suggesting how the community could move forward and overcome the conflict.
This talk aims to explore the legal and ethical limits to cybersecurity research through the prism of this recent case and, more generally, of open source software projects. Cybersecurity research is notorious for its (somewhat undertheorised) ethical issues, such as ethical hacking, responsible vulnerability disclosure, and encryption export controls. Community-driven free and open source software projects are an easy target for dishonest practices. When the project concerned is of the scale and importance of the Linux kernel, however, the question arises whether researchers embarking on such ethically questionable practices also expose themselves to potential legal liability, beyond the curt reprimand delivered by the community anyway. The Linux kernel is often considered the most important open source project in the world; it is vital for the operation of a growing number of safety-critical devices and systems, including space exploration missions such as the Perseverance rover. Some may see this as pushing the envelope of this case, yet we cannot but ask whether there are limits to ‘hypocrisy’ in research when life and limb may be at stake.
Slides 20211007-emanuilov-hypocrite.pdf
Recording "Hypocrite Commits": what are the legal and ethical limits to cybersecurity research? (by Ivo Emanuilov) Discussion

Empowerment through self-diagnosis: A critical perspective on the empowerment rhetorics around self-testing apps (by Alexandra Kapeller and Iris Loosman)

When Tuesday, 07 September 2021, 12:30 - 13:30
Abstract Direct-to-consumer self-testing apps offer users the tools to self-check for various medical conditions, giving them access to information about their health status without the intervention of health care professionals. These apps are gaining in popularity and are marketed with promises of empowerment. Although empowerment promises are common in the field of mobile health (mHealth), we argue that this rhetoric does not match what self-testing apps can offer. We start by postulating distinctive features of self-testing apps, contrasting them with other closely related mHealth services. Then, we examine the discourse around empowerment in relation to health, mHealth, and self-testing apps. Third and last, we analyse to what extent self-testing apps can contribute to empowerment in the form of control and knowledge. While empowerment as control in the literature implies control over health decisions, it is presented as control over health itself in self-testing app advertisements. Self-testing apps suggest that health can and should be controlled, while a positive result is a clear example of how there are aspects of the user’s health that are out of their control. Similarly, diagnostic information about a current health status is distinguished from knowledge about how to contextualize the test result. A self-test result does not necessarily empower users to pursue actions but instead creates situations of vulnerability in which users need guidance and care. Therefore, while not questioning the usefulness of self-testing apps, we disagree that they should be called empowering.
Slides 20210907-kapellerloosman-diagnostics.pdf

Rise of the Robots: Exploring Some Risk and Assurance Challenges for RAS (by Dr. Nikita Johnson)

When Tuesday, 08 June 2021, 12:30 - 13:30
Abstract The emergence of robotic and autonomous systems (RAS) in many safety-critical domains has significantly increased the number of assurance and certification challenges. These include (i) technical challenges such as minimising uncertainty, identifying risk and developing approaches for achieving an acceptable level of risk, and (ii) socio-technical challenges such as understanding the impact of RAS on existing concepts, competence needs, processes, tool support, and organisational and regulatory structures. In addition to the challenges for RAS, for existing, ‘traditional’ systems there are growing concerns about how to identify and manage risk across system safety and cyber security.
In this presentation, using the Safety-Security Assurance Framework (SSAF) approach applied to a real-world example, we explore the implications of uncertainty from both RAS and safety-security interactions. The focus is on the impact on practice, existing organisational structures and risk acceptance. We will conclude by discussing possible solution approaches for each of the challenges raised.
Slides 20210608-njohnson-co-assurance.pdf
Recording Rise of the Robots: Exploring Some Risk and Assurance Challenges for RAS (by Dr. Nikita Johnson) Discussion

Web tracking, consent pop-ups and dark patterns: Legal and technical perspectives (by Prof. Nataliia Bielova and Prof. Cristiana Santos)

When Thursday, 06 May 2021, 12:30 - 13:30
Abstract As millions of users browse the Web on a daily basis, they become producers of data that are continuously collected by numerous companies and agencies. Website owners, however, need to become compliant with recent EU privacy regulations (such as GDPR and ePrivacy) and often rely on consent pop-ups to either inform users or collect their consent to tracking.
This talk delves into the compliance of tracking technologies and consent pop-ups with the GDPR and the ePrivacy Directive, and offers a multi-disciplinary discourse from legal, technical and even design perspectives. We present our recent work on the detection of Web tracking, the compliance of consent pop-ups, and the identification of gaps between law, design and technology when dark patterns are used in consent pop-ups.
This talk covers our recent publications in the computer science, law and human-computer interaction domains: “Are cookie banners indeed compliant with the law? Deciphering EU legal requirements on consent and technical means to verify compliance of cookie banners.” (Int. J. on Technology and Regulation, 2020), “Missed by Filter Lists: Detecting Unknown Third-Party Trackers with Invisible Pixels” (PoPETS 2020), “Do Cookie Banners Respect my Choice? Measuring Legal Compliance of Banners from IAB Europe’s Transparency and Consent Framework” (IEEE S&P 2020), “Purposes in IAB Europe’s TCF: which legal basis and how are they used by advertisers?” (APF 2020), and “Dark Patterns and the Legal Requirements of Consent Banners: An Interaction Criticism Perspective” (ACM CHI 2021). A toy sketch of one detection signal discussed in this line of work, invisible tracking pixels, follows this entry.
Slides 20210506-bielovasantos-tracking.pdf
Recording Web tracking, consent pop-ups and dark patterns: Legal and technical perspectives (by Prof. Nataliia Bielova and Prof. Cristiana Santos) Discussion
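
The studies above detect trackers at crawler scale; as a toy illustration only, the following self-contained Python sketch shows one of the signals involved: third-party images with a 1×1 size (“invisible pixels”). The domains, markup, and heuristic are invented for the example and are far simpler than the detection methodology of the PoPETS 2020 paper.

```python
# Toy sketch: find third-party 1x1 "invisible pixel" images in HTML.
# The first-party domain, markup, and heuristic are illustrative only.
from html.parser import HTMLParser
from urllib.parse import urlparse

class PixelFinder(HTMLParser):
    def __init__(self, first_party: str):
        super().__init__()
        self.first_party = first_party
        self.hits: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        a = dict(attrs)
        host = urlparse(a.get("src") or "").hostname or ""
        tiny = a.get("width") == "1" and a.get("height") == "1"
        # Flag tiny images served from a different (third-party) host.
        if tiny and host and not host.endswith(self.first_party):
            self.hits.append(a["src"])

page = '<img src="https://tracker.example/p.gif" width="1" height="1">'
finder = PixelFinder("news.example")
finder.feed(page)
print(finder.hits)  # ['https://tracker.example/p.gif']
```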

Circumvention of tracking protections by means of first-party tracking (by Yana Dimova)

When Tuesday, 06 April 2021, 12:30 - 13:30
Abstract When browsing the web, users are continuously exposed to ads. Online trackers place cookies in the user’s browser, which are then sent back to them on each webpage visit. This practice allows trackers to build an overview of the browsing behaviour and interests of the user in order to serve more relevant ads. These privacy concerns have been met with regulatory frameworks such as the GDPR, but also with tools that aim to protect users’ privacy, such as ad blockers and the tracking protection features of browsers. In reaction, trackers are exploring new techniques to circumvent existing privacy-protecting tools. Our work investigates one of those techniques, called CNAME-based tracking. In this scenario, the tracking script is executed on a subdomain of the website, thereby bypassing tracker-blocking tools, which mostly rely on a list of known third-party tracking domains. Furthermore, our study shows that, while the popularity of this technique is increasing, it introduces certain threats for users. Personal data that users intend to share exclusively with the website leaks to the trackers. The use of CNAME-based tracking also introduces vulnerabilities affecting thousands of websites and their users. Here is a link to our paper on “The CNAME of the Game: Large-scale Analysis of DNS-based Tracking Evasion” (PETS 2021). A minimal sketch of how such CNAME cloaking can be spotted follows this entry.
Slides 20210406-ydimova-cname.pdf
Recording Circumvention of tracking protections by means of first-party tracking (by Yana Dimova) Discussion
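
As an illustration of the mechanism, here is a minimal Python sketch of how a defender might spot CNAME cloaking: it follows the CNAME chain of a first-party subdomain and checks whether the chain ends at a known tracker domain. It assumes the third-party dnspython package; the tracker list and example subdomain are hypothetical, and the paper’s actual large-scale methodology is considerably more involved.

```python
# Minimal sketch: follow a subdomain's CNAME chain and flag it if the chain
# ends at a known tracker domain. Requires the third-party dnspython package;
# the tracker list and example subdomain below are hypothetical.
import dns.resolver

KNOWN_TRACKER_DOMAINS = {"tracker-cdn.example", "analytics-host.example"}

def cname_chain(name: str, max_depth: int = 10) -> list[str]:
    """Return the chain of CNAME targets starting from `name`."""
    chain, current = [], name
    for _ in range(max_depth):  # bound the walk in case of CNAME loops
        try:
            answer = dns.resolver.resolve(current, "CNAME")
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            break  # no further alias: end of the chain
        current = str(answer[0].target).rstrip(".")
        chain.append(current)
    return chain

def is_cloaked_tracker(subdomain: str) -> bool:
    """True if the subdomain ultimately aliases a known tracker domain."""
    return any(
        target == d or target.endswith("." + d)
        for target in cname_chain(subdomain)
        for d in KNOWN_TRACKER_DOMAINS
    )

# e.g. metrics.news.example -> CNAME -> tracker-cdn.example would be flagged
print(is_cloaked_tracker("metrics.news.example"))
```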

Is the law getting outpaced by autonomous vehicles? (by Charlotte Ducuing and Orian Dheu)

When Thursday, 4 March 2021, 12:30 - 13:30
Abstract Building on their alleged societal benefits, autonomous vehicles have raised high hopes and enthusiasm among various stakeholders. Though such complex vehicles are not yet roaming our roads, regulators and policymakers alike have started reflecting on the potential legal and regulatory effects these novel artefacts could have on existing frameworks. This presentation will look at the extent to which the ‘autonomization’ of cars implies legal and regulatory disruption, more specifically through the lens of a potential shift towards a complex ecosystem and the foreseen servitization of road mobility (for instance with MaaS-based business models). With respect to road vehicles, the law indeed plays a major role in ensuring that safety and security are maintained, e.g. through certification, technical regulations and liability. Do AVs disrupt the law in that respect? The following questions will be discussed in particular: First, can we ascertain who is liable in the case of an AV-caused accident? If so, is it fair for this person (or these persons) to be held liable, and is it in line with the aim of ensuring safety and security? Second, do increasingly dynamic cybersecurity threats and/or the dynamicity of ML models challenge design-based technical regulations of road vehicles?
Slides 20210304-ducuingdheu-outpaced.pdf
Recording Is the law getting outpaced by autonomous vehicles? (by Charlotte Ducuing and Orian Dheu) Discussion

(De)constructing ethics for autonomous vehicles: What was the question again? (by Prof. Bettina Berendt)

When Thursday, 25 February 2021, 12:30 - 13:30
Abstract Autonomous vehicles (AV) have become one of the favourite examples of “AI ethics” debates. They appear to present an excellent real-life application of the Trolley Problem, a well-known thought experiment in philosophical ethics, and thereby facilitate meaningful debates on machine ethics. In this talk, I want to challenge these assumptions and report on alternatives tried in various settings involving computer scientists and others interested in AI ethics (teaching and workshops). The goal is to engage engineers in reflecting on their work, and ultimately to help them transform these reflections into creating different systems-with-AI.
I will briefly describe the simple (but widespread) idea that the AV should “decide whom to sacrifice in an accident dilemma situation”, argue why it is misleading, but also illustrate how the topic can be used differently towards more productive AI-ethics discussions. Among other things, productive discussions should question what the question (regarding autonomous cars or other issues) is. I will describe adversarial and deconstructive methods for opening AI-ethics discussions, show example results of applying them to AV problems, highlight the role of domain knowledge (including multi-disciplinary aspects such as legal questions), and close with many open questions – which I look forward to investigating together with you!
Slides 20210225-bberendt-autonomous.pdf
Recording (De)constructing ethics for autonomous vehicles: What was the question again? (by Prof. Bettina Berendt)

Proximity tracing with Coronalert: lessons learned (by Prof. Bart Preneel)

When Tuesday, 09 February 2021, 12:30 - 13:30
Abstract The corona pandemic is the first major pandemic in times of big data, AI and smart devices. Some nations have deployed these technologies at a large scale to support a trace/quarantine/test/isolate strategy in order to contain the pandemic. However, serious concerns have been raised w.r.t. the privacy implications of some solutions, which make them incompatible with the privacy and human rights protected by EU law. This talk focuses on the proximity tracing solution developed by the DP-3T (Distributed Privacy-Preserving Proximity Tracing) consortium, which has been rolled out in apps in more than 40 countries and states, with the support of Google and Apple. We will provide some details on the experience with the Coronalert app in Belgium, which is connected to the European Federated Gateway Service, currently covering 11 EU countries and more than 40 million users. The talk will discuss the lessons learned from this large-scale deployment, in which the principles of privacy-by-design and data minimization have played a central role. A heavily simplified sketch of the decentralised protocol idea follows this entry.
Recording Proximity tracing with Coronalert: lessons learned (Prof. Bart Preneel) Discussion
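
For readers unfamiliar with the decentralised design, the following Python sketch caricatures the core idea: ephemeral broadcast identifiers are derived from a daily secret key, and exposure matching happens locally on the phone once an infected user’s daily keys are published. All parameters and derivations here are simplified stand-ins, not the actual DP-3T or GAEN specification.

```python
# Heavily simplified sketch of decentralised proximity tracing: phones
# broadcast short-lived pseudorandom IDs derived from a daily key; when an
# infected user publishes their daily keys, other phones re-derive the IDs
# locally and compare. Parameters are illustrative, not the DP-3T spec.
import hashlib
import hmac
import secrets

def next_day_key(sk: bytes) -> bytes:
    """Ratchet the daily key forward; old keys cannot be recovered."""
    return hashlib.sha256(sk).digest()

def ephemeral_ids(sk_day: bytes, per_day: int = 96) -> list[bytes]:
    """Derive the day's broadcast identifiers from the daily key via a PRF."""
    return [
        hmac.new(sk_day, f"ephid-{i}".encode(), hashlib.sha256).digest()[:16]
        for i in range(per_day)
    ]

sk_day1 = secrets.token_bytes(32)
sk_day2 = next_day_key(sk_day1)            # the key ratchets forward daily
heard = {ephemeral_ids(sk_day2)[42]}       # Bob's phone overheard one ID
# Alice tests positive and publishes her daily keys; Bob matches locally,
# so no contact information ever leaves his phone.
print("exposure detected:", any(e in heard for e in ephemeral_ids(sk_day2)))
```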

Recent developments concerning confidentiality of communications and the use of technology to combat child sexual abuse online (by Dr Brendan Van Alsenoy)

When Tuesday, 19 January 2021, 12:30 - 13:30
Abstract On 10 September 2020, the Commission published a Proposal for a Regulation on a temporary derogation from certain provisions of the ePrivacy Directive (2002/58/EC) as regards the use of technologies by “number-independent interpersonal communications services” for the purpose of combatting child sexual abuse online. The measures envisaged by the Proposal would constitute an interference with the fundamental rights to respect for private life and data protection of all users of very popular electronic communications services, such as instant messaging platforms and applications.
This seminar will discuss the main considerations and recommendations provided by the EDPS in Opinion 7/2020 in relation to the Commission Proposal, including the safeguards that are necessary to ensure that the persons whose personal data is affected have sufficient guarantees that data will be effectively protected against the risk of abuse.
As the EDPS underlines in its Opinion, the issues at stake are not specific to the fight against child abuse but arise for any initiative seeking the collaboration of the private sector for law enforcement purposes. If adopted, the Proposal will inevitably serve as a precedent for future legislation in this field.
Slides 20210119-bvanalsenoy-confidentiality.pdf
Recording Recent developments concerning confidentiality of communications and the use of technology to combat child sexual abuse online (by Dr Brendan Van Alsenoy)

Technological Testing Grounds: Migration Management Experiments from the Ground Up (by Dr. Petra Molnar)

When Wednesday, 6 January 2021, 12:30 - 13:30
Abstract Technology is increasingly being used at the border. From drones to Big Data to algorithms, countries are turning to novel techniques to ‘manage’ migration. However, these technological experiments often do not consider the profound human rights ramifications and real impacts on human lives. Join us for a discussion of a new report, Technological Testing Grounds, based on interviews with refugees and people on the move, highlighting the panopticon of increasing surveillance and automation at the border.
Slides 20210106-pmolnar-migration.pdf
Recording Technological Testing Grounds: Migration Management Experiments from the Ground Up (by Dr. Petra Molnar) Discussion

Cyber security from technological research to offensive (mis)appropriation (by Erik Zouave)

When Thursday, 17 December 2020, 12:30 - 13:30
Abstract Research on cyber technology is largely undertaken with the aim of increasing the security of systems, software, users, and society at large. At least, that is the conclusion that might be drawn from research and research investments in the public domain. However, as a matter of law, regulators recognize that some types of technologies are inherently “dual use”. Moreover, many more cyber technologies than those explicitly regulated as “dual use” can become part and parcel of future “offensive” end-use and criminal misuse. OpenAI’s initially withheld release of the GPT-2 natural language model is arguably a case in point. While many researchers may be aware of an international “arms race” between states and of criminal technological trade on “black markets”, insight into how security research fits into this development is more complex. This seminar aims to broaden researchers’ perspectives on how cyber security technologies can have “offensive” end-uses and can be at risk of misuse. It further exemplifies formalized systems of development, dissemination and trade that may further “offensive” exploitation and sometimes result in misuse. Finally, the challenges of formulating effective legal responses to unwanted end-uses of technology, such as through export controls, cybercrime law, data protection, and vulnerability equities or disclosure, are addressed.
Slides 20201217-ezouave-offensive.pdf
Recording Cyber security from technological research to offensive (mis)appropriation (Erik Zouave) Discussion

Privacy by Design (by Dr. Laurens Sion)

When Tuesday, 17 November 2020, 12:30 - 13:30
Abstract Building software-intensive systems that respect the fundamental rights to privacy and data protection requires explicitly addressing data protection issues at the early development stages. Data Protection by Design (DPbD)—as coined by Article 25(1) of the General Data Protection Regulation (GDPR)—therefore calls for an iterative approach based on (i) the notion of risk to data subjects, (ii) a close collaboration between the involved stakeholders, and (iii) accountable decision-making.
In practice, however, the legal reasoning behind DPbD is often conducted on the basis of informal system descriptions that lack systematicity and reproducibility. This affects the quality of Data Protection Impact Assessments (DPIAs), i.e. the concrete manifestation of DPbD at the organizational level. This is a major stumbling block when it comes to conducting a comprehensive and durable assessment of the risks that takes both the legal and technical complexities into account.
In this talk, I present DPMF: a data protection modeling framework that allows for a comprehensive and accurate description of the data processing operations in terms of the key concepts used in the GDPR. The proposed modeling approach accommodates a number of legal reasonings and assessments that are commonly addressed in a DPIA exercise (e.g., the compatibility of purposes). The DPMF is supported in a prototype modeling tool and its practical applicability is validated in the context of a realistic eHealth system for a number of complementary development scenarios. A hypothetical sketch of this style of modeling follows this entry.
Slides 20201117-lsion-dpbd.pdf
Recording Privacy by Design (Dr. Laurens Sion)
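
As a purely hypothetical sketch (not the actual DPMF tool or its API), the following Python snippet shows the flavour of such modeling: a processing operation described in GDPR terms, over which a simple DPIA-style check such as purpose compatibility can be evaluated. All names, purposes, and the compatibility relation are invented.

```python
# Hypothetical sketch (not the actual DPMF tool or its API): describe a
# processing operation in GDPR terms and run a toy purpose-compatibility
# check of the kind a DPIA exercise would cover. All names are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class Processing:
    name: str
    personal_data: frozenset[str]
    purpose: str
    legal_basis: str

# Toy compatibility relation between an original and a further purpose.
COMPATIBLE_PURPOSES = {("care provision", "care quality auditing")}

def purposes_compatible(original: Processing, further: Processing) -> bool:
    return (original.purpose, further.purpose) in COMPATIBLE_PURPOSES

intake = Processing("patient intake", frozenset({"name", "diagnosis"}),
                    "care provision", "Art. 9(2)(h) GDPR")
audit = Processing("quality audit", frozenset({"diagnosis"}),
                   "care quality auditing", "Art. 9(2)(h) GDPR")
ads = Processing("newsletter", frozenset({"name"}),
                 "direct marketing", "consent")

print(purposes_compatible(intake, audit))  # True: compatible further use
print(purposes_compatible(intake, ads))    # False: flag for DPIA follow-up
```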

Compelled Decryption (by Dr. Sofie Royer and Ward Yperman)

When Thursday, 29 October 2020, 12:30 - 13:30
Abstract You have the right to remain silent! But what does that mean when you are forced to provide your smartphone password (or fingerprint) to the police? This is a question many scholars and courts alike have grappled with. The right to remain silent and not incriminate oneself is one of the basic principles of our legal system. While this right seems fairly straightforward, technological change comes with new challenges. In this talk, Sofie Royer and Ward Yperman will explain the scope of the right to silence and shed light on the rules on compelled decryption in Belgium and beyond. This includes the consequences of refusing to provide your password and the countermeasures that one can take in order to protect oneself.
Slides 20201029-sroyer-compelled.pdf
Recording Compelled Decryption (Dr. Sofie Royer and Ward Yperman) Discussion

Watching IoTs That Watch Us: Empirically Studying IoT Security & Privacy at Scale (by Prof. Danny Y. Huang)

When Friday, 16 October 2020, 12:30 - 13:30
Abstract Consumers today are increasingly concerned about the security and privacy risks of smart home IoT devices. However, few empirical studies have looked at these problems at scale, partly because the large variety and number of smart-home IoT devices are often closed-source and sit on private home networks, making it difficult for researchers to systematically observe the actual security and privacy issues faced by users in the wild. In this talk, I describe two methods for researchers to empirically understand these risks to real end-users: (i) crowdsourcing network traffic from thousands of real smart home networks, and (ii) emulating user inputs to study how thousands of smart TV apps track viewers. Both methods have allowed us to conduct the largest security and privacy studies on smart TVs and other IoT devices to date. Our labeled datasets have also created new opportunities for other research areas, such as machine learning, network management, and healthcare.
Slides 20201016-dhuang-watching-iots.pdf
Recording Watching IoTs That Watch Us: Empirically Studying IoT Security & Privacy at Scale (Prof. Danny Y. Huang) Discussion

LINDDUN GO: A lightweight approach to privacy threat modeling (by Dr. Kim Wuyts)

When Tuesday, 29 September 2020, 12:30 - 13:30
Abstract In this talk, Kim will present LINDDUN GO, a toolkit for lightweight privacy threat modeling. In the first part of the talk, we will look into threat modeling in general and learn more about the LINDDUN privacy threat modeling framework. We will zoom in on the main privacy threat categories encompassed by LINDDUN and walk through the LINDDUN methodology. In the second part of the talk, LINDDUN GO, the lightweight variant, will be introduced. A toy illustration of LINDDUN-style threat elicitation follows this entry.
Slides 20200929-kwuyts-linddun-go.pdf
Recording LINDDUN GO: A lightweight approach to privacy threat modeling (Kim Wuyts)
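
As a toy illustration of the elicitation step, the sketch below crosses example system elements with the seven LINDDUN threat category names to produce a brainstorming checklist. The elements are invented, and the real methodology relies on threat trees and the LINDDUN GO card deck rather than a bare cross-product.

```python
# Toy sketch of LINDDUN-style elicitation: cross example system elements
# with the seven LINDDUN threat categories to produce a review checklist.
# The elements are invented; the real methodology uses threat trees.
from itertools import product

LINDDUN_CATEGORIES = [
    "Linkability", "Identifiability", "Non-repudiation", "Detectability",
    "Disclosure of information", "Unawareness", "Non-compliance",
]

elements = ["user data store", "login flow", "analytics export"]

for element, category in product(elements, LINDDUN_CATEGORIES):
    print(f"[ ] {category} threat at: {element}")
```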

The full force of the state: predictive policing interventions (by Fieke Jansen)

When Monday, 17 August 2020, 12:30 - 13:30
Abstract In this seminar, Fieke will talk about the need for contextualization when discussing the effectiveness, fairness and legitimacy of predictive policing. She will start her presentation with an introduction to predictive policing, its imaginaries, uses and challenges. The seminar will then outline some of the key assumptions on which predictive policing is based. All audiences are welcome: no prior knowledge of the police or technology is required for this seminar.
Slides 20200817-fjansen-policing.pdf
Recording The full force of the state: predictive policing interventions (Fieke Jansen) Discussion

(In-)Security of Implantable Medical Devices (by Dr. Eduard Marin Fabregas)

When Friday, 17 July 2020, 12:30 - 13:30
Abstract In this seminar, Eduard will give a talk on the security of medical devices. Eduard will begin by explaining the evolution of medical devices and how this opens the door for security attacks. Afterwards, he will talk about our experience analyzing the security of several implantable medical devices such as pacemakers, insulin pumps and neurostimulators. Finally, there will be room for some legal observations by Elisabetta and Erik, the speakers of the seminar of Thursday, 16 July 2020 on “Cybersecurity of Medical Devices: Legal and Ethical Challenges”.
Slides 20200717-emarin-medical-insec.pdf
Recording (In-)Security of Implantable Medical Devices (Eduard Marin) Discussion

Cybersecurity of Medical Devices: Legal and Ethical Challenges (by Elisabetta Biasin and Erik Kamenjašević)

When Thursday, 16 July 2020, 12:30 - 13:30
Abstract In this seminar, Elisabetta and Erik will talk about the cybersecurity of medical devices from the legal and ethical perspectives. They will start their presentation by defining what a medical device is and how it relates to EU cybersecurity norms stemming from different pieces of legislation. The seminar will then outline some of the most pressing ethical concerns and explain how the existing cybersecurity legal framework could mitigate these. All audiences are welcome: no prior knowledge of the law is required for this seminar.
Slides 20200716-ebiasin-medical-legal.pdf
Recording Cybersecurity of Medical Devices: Legal and Ethical Challenges (Elisabetta Biasin and Erik Kamenjašević)

Homomorphic Encryption (by Prof. Dr. Nigel Smart)

When Tuesday, 30 June 2020, 12:30 – 13:30
Abstract In this seminar, Prof. Dr. Nigel Smart will give a talk on computing on encrypted data. He will discuss how the technologies of MPC and FHE are changing the way people process data, by allowing them to compute on data whilst it is “encrypted”. He will discuss a number of use cases of the technology. The talk will not go into technical details, so you do not need to understand any cryptography to follow it. A toy illustration of computing on hidden data follows this entry.
Slides 20200630-smart-homomorphic-crypto.pdf
Recording Homomorphic Encryption (Nigel Smart)
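
The talk itself stays non-technical, but as a toy illustration of “computing on data while it stays hidden”, here is a self-contained Python sketch of additive secret sharing, one of the basic building blocks of MPC. It is didactic only, not a secure protocol, and it does not illustrate FHE.

```python
# Toy illustration of one building block of MPC: additive secret sharing.
# Each input is split into random shares that individually reveal nothing;
# parties add shares locally, and only the recombined total is revealed.
# Didactic sketch only, not a secure or complete protocol.
import secrets

P = 2**61 - 1  # public prime modulus; all arithmetic is mod P

def share(x: int, n: int = 3) -> list[int]:
    """Split x into n additive shares that sum to x mod P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % P

salaries = [4200, 3900, 5100]                    # three private inputs
all_shares = [share(s) for s in salaries]
# Party i adds the i-th share of every input locally ...
partial_sums = [sum(col) % P for col in zip(*all_shares)]
# ... and only the combined total is reconstructed: 13200.
print(reconstruct(partial_sums))
```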

Ethical Hacking (by Dr. Sofie Royer)

When Tuesday, 05 May 2020, 12:30 – 13:30
Abstract In this seminar, Sofie Royer will give a talk on hacking and its legal implications. We’ll first have a look at the basics of criminal law and criminal procedure. What does the offence of hacking look like? Are there any European rules on this topic? From which point can someone be held criminally liable for his or her behavior? Who can decide whether a criminal investigation is initiated? What punishment does someone risk when he or she is found liable for hacking? Does ethical hacking constitute an offence in Belgium? What about other countries? In other words, in this seminar you’ll find out what’s at stake when engaging in hacking activities.
Slides 20200505-sroyer-ethical-hacking.pdf

Threat modeling: A guided tour (by Dr. Koen Yskout)

When Wednesday, 08 April 2020, 12:30 – 13:30
Abstract In this talk, we’ll take a tour through the current state of practice regarding threat modeling in software engineering. Starting from the goal of threat modeling, we’ll encounter the relation between threats and objectives, risk, design decisions, vulnerabilities, and exploits. We’ll look at a few typical threats to software systems, and practical techniques to elicit them (e.g., attack trees, attack libraries, STRIDE, and LINDDUN). We’ll finish by discussing some of the limitations of, and hurdles for, doing threat modeling in a modern software development lifecycle and enterprise environment. A toy sketch of an attack tree follows this entry.
Slides 20200408-kyskout-threat-modeling.pdf
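
As a small illustration of one elicitation technique mentioned above, the sketch below encodes an attack tree as a data structure with AND/OR gates and propagates a rough feasibility score. The tree and the numbers are invented for the example.

```python
# Toy sketch of an attack tree, one of the threat-elicitation techniques
# mentioned above: internal nodes combine sub-attacks with AND/OR gates,
# leaves carry a rough feasibility estimate. Structure and numbers are
# invented for illustration.
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    goal: str
    gate: str = "OR"             # "OR": any child suffices; "AND": all needed
    feasibility: float = 0.0     # used only for leaves, in [0, 1]
    children: list["Node"] = field(default_factory=list)

    def score(self) -> float:
        """Rough feasibility of achieving this node's goal."""
        if not self.children:
            return self.feasibility
        scores = [c.score() for c in self.children]
        return math.prod(scores) if self.gate == "AND" else max(scores)

tree = Node("Read a user's messages", children=[
    Node("Steal the device", gate="AND", children=[
        Node("Gain physical access", feasibility=0.3),
        Node("Bypass the lock screen", feasibility=0.2),
    ]),
    Node("Phish the account credentials", feasibility=0.4),
])
print(f"Most feasible attack path: {tree.score():.2f}")  # 0.40
```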

The NIS-Directive and the Cybersecurity Act (by Stefano Fantin and Michiel Fierens)

When Tuesday, 03 March 2020, 12:30 – 13:30
Abstract The aim of this presentation is to provide a basic introduction to the most recent cybersecurity legislation issued by the EU. More specifically, the first part of the presentation will give an overview of the policy landscape that led the law-maker to legislate back in 2015-2016. Having set out the basics (inter alia, touching upon some fundamentals of EU law), we will go through the main parts of the NIS Directive, the obligations the law imposes on Member States and economic actors (including on incident reporting), and the lessons we can learn from it. The second part of the presentation will cover the newly introduced Cybersecurity Act, which regulates the role of ENISA and a pan-European cybersecurity certification scheme. Finally, we will provide a brief overview of what a future EU certification scheme could look like, on the basis of the recent ENISA report.
Slides 20200303-fantin-fierens-nis-cybersec.pdf
Recording NIS Directive (Stefano Fantin) Cybersecurity Act (Michiel Fierens)

Trusted Execution and how far you can trust it (by Dr. Jan Tobias Muehlberg)

When Friday, 07 February 2020, 12:00 – 13:00
Abstract Even the most well-tested software is not secure if it executes on untrusted infrastructure, in an untrusted operating system, or relies on third-party libraries that may contain vulnerabilities. Modern processors provide Trusted Execution Environments (TEEs) that can protect a software component from low-level attacks, allow remote parties to verify the integrity of that component, and tie execution of the component to a unique trusted element in the execution platform. In this talk, Dr. Jan Tobias Muehlberg will give an introduction to software-level vulnerabilities and TEEs, and discuss the security and privacy implications of this technology. A toy sketch of the remote-attestation idea follows this entry.
Slides 20200207-jtmuehlberg-trusted-computing.pdf
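
As a toy illustration of the remote-attestation idea, the following self-contained Python sketch has a “platform” sign a measurement (hash) of a component and a “remote party” verify it. The HMAC with a shared key stands in for a hardware-backed digital signature; real TEEs provide far richer evidence and key management.

```python
# Toy sketch of remote attestation: the platform's trusted element signs a
# measurement (hash) of the loaded component; a remote party verifies both
# the signature and the expected measurement. The shared-key HMAC stands in
# for a hardware-backed digital signature; purely illustrative.
import hashlib
import hmac
import secrets

DEVICE_KEY = secrets.token_bytes(32)  # stand-in for a per-device hardware key

def attest(component: bytes) -> tuple[bytes, bytes]:
    """Platform side: measure the component and sign the measurement."""
    measurement = hashlib.sha256(component).digest()
    signature = hmac.new(DEVICE_KEY, measurement, hashlib.sha256).digest()
    return measurement, signature

def verify(measurement: bytes, signature: bytes, expected: bytes) -> bool:
    """Remote side: check the signature and that the right code is running."""
    good_sig = hmac.compare_digest(
        hmac.new(DEVICE_KEY, measurement, hashlib.sha256).digest(), signature
    )
    return good_sig and hmac.compare_digest(measurement, expected)

code = b"enclave binary v1.0"
expected = hashlib.sha256(code).digest()        # verifier knows the good hash
print(verify(*attest(code), expected))              # True
print(verify(*attest(b"tampered code"), expected))  # False
```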

Contact & Mailing List

Talk announcements are published on this website, on Twitter under #CIFSeminarsKUL, and on the CIF-Seminars mailing list. Email CiTiP Admin to subscribe.

The series is organised by Jan Tobias Muehlberg (DistriNet), Benedikt Gierlichs (COSIC) and Sofie Royer (CiTiP). Get in touch with us if you have questions or suggestions.