Making mobile and embedded AI systems secure and trustworthy (2024)

Harnessing the potential of artificial intelligence (AI) is key to the future viability of companies. However, they often have limited knowledge of how to use AI securely. The SENSIBLE-KI (sensitive AI) project, funded by the German Federal Ministry for Economic Affairs and Climate Action (BMWK), has brought together science and industry to analyze the security and trustworthiness of mobile and embedded systems, using Android systems as an example. The key findings and results of the project are as follows: software-based security measures can already be used effectively today; hardware-based measures are the subject of intensive research, but their use is currently restricted; and two biometric demonstrators show the practical use of these security technologies.

AI methods are used in a variety of applications, for example in biometric verification processes such as video conferencing systems or gait-based authentication. AI is also increasingly being implemented directly on end devices, for example for voice-based identification on smartphones. However, there is currently no standardized approach to securing AI systems on mobile and embedded platforms, which can lead to significant security vulnerabilities.

Focus on Android systems and hardware

Over the past three years, the SENSIBLE-KI research project has investigated how AI systems in mobile applications and embedded systems can be designed to be secure and trustworthy. The focus was on Android systems and hardware, as well as on the protection of highly confidential data (e.g., patient data or trade secrets).

The following key findings were identified from the research:

  • AI algorithms should be resistant to manipulation and should ensure data protection. Achieving both requires balancing application-specific software-based measures with adjustments to the AI learning algorithms. Software-based measures can be applied in the training phase of the AI model, regardless of the platform, to strengthen robustness and privacy.
  • Hardware-based measures are almost impossible to implement, especially on mobile devices, due to restrictive application programming interfaces (APIs) and the resulting reduced functionality of trusted execution environments (TEEs)* and tamper-resistant hardware.
  • Executing an AI application entirely in a secure environment (TEE) would cover all the security goals considered in the project: the application would be better protected against manipulation and against illegitimate access to the model content or its output data. However, this measure is still difficult to implement in practice, partly due to a lack of resources in TEEs, and is currently only feasible on embedded platforms; a software-level stand-in is sketched after this list. Nevertheless, new research approaches are being developed, and hardware manufacturers have recognized the demand and are developing solutions that could facilitate implementation in the future.
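
Where executing the whole AI application inside a TEE is not feasible, a common software-level stand-in is to keep the serialized model encrypted at rest and decrypt it only in memory at load time, which addresses illegitimate access to the model content (though not runtime manipulation). A minimal sketch using the Python cryptography library; the file handling and key management shown here are illustrative assumptions, not the project's published design:

```python
from cryptography.fernet import Fernet  # authenticated symmetric encryption

def encrypt_model(model_path: str, enc_path: str) -> bytes:
    """Encrypt a serialized model file at rest and return the key.
    In practice the key belongs in hardware-backed storage (e.g., the
    Android Keystore), never on disk next to the model."""
    key = Fernet.generate_key()
    with open(model_path, "rb") as f:
        ciphertext = Fernet(key).encrypt(f.read())
    with open(enc_path, "wb") as f:
        f.write(ciphertext)
    return key

def load_model_bytes(enc_path: str, key: bytes) -> bytes:
    """Decrypt the model only in memory, immediately before loading it
    into the inference runtime."""
    with open(enc_path, "rb") as f:
        return Fernet(key).decrypt(f.read())
```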

“We have classified AI systems, determined their protection requirements and identified suitable protective measures for characteristic AI use cases. Above all, we now have a better understanding of which security measures can be implemented for AI on Android platforms. Companies can benefit from these findings, for example when developing their own secure solutions,” says Kinga Wróblewska-Augustin, project manager at the Fraunhofer Institute for Applied and Integrated Security AISEC.

More robust, real-time detection of deepfake attacks in video conferences

The methods and approaches developed as part of SENSIBLE-KI were implemented in two demonstrators by the industrial partners with the support of the participating research institutes.

The research prototype for real-time detection of deepfake attacks in video conferences is based on Self-ID, an innovative technology from Bundesdruckerei that uses visual self-recognition as a biometric identification mechanism. Quality checks on the input data and “adversarial retraining”* improved the robustness of the AI system against targeted manipulation. Filtering out personal data that could be used to infer the training data of the AI model increased privacy protection.
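
Bundesdruckerei has not published the Self-ID training pipeline, but the general idea of adversarial retraining is to augment each training batch with deliberately perturbed inputs so the model learns to resist them. A minimal sketch assuming a PyTorch classifier and FGSM-style perturbations, one common choice of attack model (all names here are illustrative):

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, eps, loss_fn):
    """FGSM adversarial example: x' = x + eps * sign(grad_x loss),
    assuming inputs scaled to [0, 1]."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

def adversarial_retraining_step(model, x, y, optimizer, eps=0.03):
    """One step of training on a mix of clean and perturbed examples."""
    loss_fn = nn.CrossEntropyLoss()
    x_adv = fgsm_perturb(model, x, y, eps, loss_fn)
    optimizer.zero_grad()  # clear gradients accumulated while crafting x_adv
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training on clean and perturbed examples together typically trades a small amount of clean accuracy for markedly better robustness against perturbations of the kind seen during training.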

Differential privacy* was also tested within the Self-ID demonstrator to determine the impact of adding noise, which obscures information about individual training examples, on the quality of predicting whether the person being observed saw themselves, a stranger or a deepfake. Noise inevitably reduces the accuracy of the model; specifically, self-recognition and the recognition of other people became less accurate. The study determined the amount of noise required without reducing accuracy more than necessary.
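
The demonstrator's exact privacy mechanism is not described in detail, but the standard way to inject such noise during training is DP-SGD: per-sample gradients are clipped and Gaussian noise is added before each parameter update. A minimal sketch assuming a PyTorch setup and the Opacus library, with the noise level as the knob to calibrate:

```python
import torch
from opacus import PrivacyEngine

def train_with_dp(model, train_loader, epochs, noise_multiplier, max_grad_norm=1.0):
    """Train a classifier with DP-SGD and report the privacy budget spent."""
    optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
    engine = PrivacyEngine()
    model, optimizer, train_loader = engine.make_private(
        module=model,
        optimizer=optimizer,
        data_loader=train_loader,
        noise_multiplier=noise_multiplier,  # more noise: stronger privacy, lower accuracy
        max_grad_norm=max_grad_norm,        # per-sample gradient clipping bound
    )
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in train_loader:
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()
    return model, engine.get_epsilon(delta=1e-5)  # epsilon for a fixed delta
```

Sweeping noise_multiplier and measuring validation accuracy at each setting reproduces the trade-off described above: the smallest multiplier that still meets the privacy target keeps the accuracy loss to what is strictly necessary.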

Reliable detection of gait patterns for access control

“SeamlessMe” is an access control system that carries out authentication based on the gait pattern. The algorithm learns an individual gait pattern (one-class classification) and detects deviations from it (novelty detection). This enables it to derive an individual confidence level from the gait of the respective user, who is thus recognized by the way they walk.
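
The internals of SeamlessMe are not public; the sketch below only illustrates the one-class-classification and novelty-detection pattern named above, using scikit-learn's OneClassSVM on simple statistics computed over accelerometer windows. The feature set, window size and score threshold are illustrative assumptions:

```python
import numpy as np
from sklearn.svm import OneClassSVM

def gait_features(windows):
    """Per-window features from 3-axis accelerometer data of shape
    (n_windows, samples_per_window, 3): mean, std and peak per axis."""
    return np.hstack([
        windows.mean(axis=1),
        windows.std(axis=1),
        np.abs(windows).max(axis=1),
    ])

# Enrollment: fit the one-class model on the legitimate user's gait only.
enroll_windows = np.random.randn(200, 128, 3)  # placeholder for real sensor data
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)
clf.fit(gait_features(enroll_windows))

# Authentication: decision_function yields a confidence score per window;
# an unfamiliar gait (novelty) scores negative and access is denied.
new_windows = np.random.randn(10, 128, 3)
scores = clf.decision_function(gait_features(new_windows))
authenticated = scores.mean() > 0.0
```

Averaging scores over several consecutive windows, as done here, smooths momentary fluctuations and corresponds to the per-user confidence level described above.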
