Commit 16e5585e authored by FKHals's avatar FKHals
Create alpha version of the report

parent a19e9488
@@ -5,3 +5,4 @@
 *.out
 *.fdb_latexmk
 *.fls
+*.lof
@@ -95,7 +95,7 @@ Beware that most of the subtitles in the structure must be renamed to be more co
 \section{Open questions}
 \begin{enumerate}
-\item Initially your proposal was just "Low-power IoT system software" but I thought that may implicitly include the security context (considering the course title). Is the additional security focus desired?
+\item Initially your proposal was just "Low-power IoT system software" but I thought that may implicitly include the security context (considering the course title). Is the additional security focus and the altered title ("Securing low-power IoT system software") desired?
 \item Is it a good idea to show some attack vectors on IoT system software as a motivation for the reader (without going into details) or is that too specific to be of value and should be omitted?
 \item Are there other aspects/approaches that I may have overlooked and that need to be discussed in the report?
 \item Should I focus on one of the approaches more than on the others (considering e.g. overlaps with the work of other students and the focus of the course)?
# template for ieee from [here](https://www.ieee.org/conferences/publishing/templates.html)
LATEX = latexmk
FLAGS = -pdf
all: pdf
pdf:
	$(LATEX) $(FLAGS) *.tex
clean:
	rm -f *.aux *.log *.out *.fdb_latexmk *.fls *.lof
\documentclass[conference]{IEEEtran}
\usepackage{cite}
\usepackage{amsmath,amssymb,amsfonts}
\usepackage{algorithmic}
\usepackage{graphicx}
\usepackage{textcomp}
\usepackage{xcolor}
\usepackage{hyperref} % links to references
\usepackage{bookmark} % links to sections (requires ToC)
\usepackage{listings}
\def\BibTeX{{\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08em
T\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}}
\newcommand{\lstil}[1]{%
\lstinline[basicstyle=\ttfamily]{#1}%
}
\begin{document}
\title{How to secure low-power IoT systems\\
using the example of Tock OS}
\author{\IEEEauthorblockN{Felix Korthals}
\IEEEauthorblockA{\textit{FU Berlin} \\
Berlin, Germany \\
felix.korthals@live.de}
}
\maketitle
\begin{abstract}
The goal of this report is to answer how low-power IoT system software can ensure security properties (confidentiality, integrity, availability) in consideration of the constraints in memory space, processing power, power usage and hardware components.
These security properties concern, for example, the isolation between different processes or the integrity of a program's control flow.
Memory protection in particular may be more difficult in IoT systems, since the hardware often does not provide a Memory Management Unit (MMU), so hardware-assisted virtual addressing is not possible.
\colorbox{yellow}{[TODO...]}
\end{abstract}
\begin{IEEEkeywords}
Internet of Things (IoT), operating systems, systems security, real-time systems, embedded and cyber-physical systems
\end{IEEEkeywords}
\section{Introduction/Motivation}
\label{intro}
Many, often small and low-powered, systems constitute the so-called Internet of Things (IoT).
Due to the widespread usage of such microcontroller-powered devices in many different and often security-sensitive scenarios, securing the system software of these devices has become relevant.
Most general-purpose system software assumes a plethora of resources from the hardware it is running on:
a fast processor with cryptographic extensions (e.g. AES, SHA1, SHA2-256)%https://www.intel.com/content/www/us/en/docs/programmable/683222/21-4/cryptographic-extensions-stratix-10-fm.html
, relatively large amounts of RAM %TODO Abbreviation?
to use for the operating system (OS) and the applications it serves, and a Memory Management Unit (MMU), which achieves memory protection, and thereby process isolation, by implementing virtual address spaces and memory segmentation.
In contrast, low-power microcontrollers (MCUs) are much more limited in their hardware and power resources: they do not offer an MMU and often provide no more than 64~kB of RAM.
\cite[p.~235, par.~1]{tockos}
These circumstances make the development of secure system software for low-power IoT devices much more challenging than for general-purpose computers.
The goal of this report is to answer how low-power IoT system software can ensure security properties (confidentiality, integrity, availability) in consideration of the above-mentioned constraints in memory space, processing power, power usage and hardware components.
To achieve this goal, the report first non-exhaustively describes a couple of general attack vectors on IoT system software, which motivates the following overview of different, not necessarily mutually exclusive, approaches to IoT system software security.
Subsequently, Tock OS is examined more closely as a concrete implementation of some of the preceding approaches.
For simplicity and conciseness, any system software will be called an operating system hereafter, and the privileged part of such system software will be called its kernel.
That does not necessarily mean the system software in question must satisfy any concrete definition of an OS, but merely that this report will use these concepts interchangeably.
\subsection{Attack vectors on IoT system software}
The attack vectors on IoT system software are numerous, therefore this section provides some non-exhaustive examples to give the reader a basic understanding of what the proposed security enhancements aim to counteract:
Software vulnerabilities can be exploited, e.g. to trigger buffer overflows in applications on the system, which can then be used to inject and possibly execute malicious code, violating confidentiality, integrity and availability altogether.
Furthermore, a malicious application could be running on a platform that multiple untrusted users are allowed to use, which can violate e.g. confidentiality in the absence of process isolation.
Lastly, an attacker could also physically manipulate the device, since a significant number of IoT devices is most probably not physically secured in any meaningful way.
These physical manipulations include hard-resetting the system, probing the memory bus or disconnecting memory from the processor to gain sensitive information.
\cite[p.~346, sect.~2.1]{b1}
However, this report restricts its examination exclusively to attacks without the possibility of any physical interaction with the system's components.
%Short example, e.g. Return address corruption on the stack
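The difference between an unchecked write, which an attacker can abuse for memory corruption, and a bounds-checked one can be sketched as follows. This is a minimal, hypothetical example (the buffer, index and function names are illustrative, not taken from any real system):

```rust
// Hypothetical sketch: a fixed-size packet buffer and an attacker-controlled
// index. A bounds-checked write rejects the out-of-range access that an
// unchecked C-style write would turn into silent memory corruption.
fn store(buf: &mut [u8; 4], index: usize, value: u8) -> bool {
    match buf.get_mut(index) {
        Some(slot) => { *slot = value; true } // in bounds: perform the write
        None => false,                        // out of bounds: refuse, no corruption
    }
}

fn main() {
    let mut buf = [0u8; 4];
    assert!(store(&mut buf, 3, 42));  // last valid slot
    assert!(!store(&mut buf, 4, 99)); // one past the end: rejected
    assert_eq!(buf, [0, 0, 0, 42]);
}
```

In an unsafe language the second call would overwrite whatever lies behind the buffer, e.g. a return address on the stack.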
\section{Approaches}
\label{sec:approaches}
The following subsections deal with the question of which approaches are currently researched or even in use to improve the security of low-power system software.
These approaches for enhancing system software security are not mutually exclusive, and the list is non-exhaustive.
The first examined approach is the usage of additional memory protection hardware (e.g. the Memory Protection Unit (MPU) of the ARM Cortex-M) to support existing/conventional software systems.
For novel system software, a (mostly) memory- and type-safe programming language can also be used (in contrast to the widespread usage of the C language).
Some operating systems like "seL4" use formal methods to verify the code of the system software against a specification.
Virtualization or containerization is also used to isolate critical applications from each other.
Lastly, certain hardware architecture enhancements, such as a "trusted execution environment" (TEE), strive to enhance system software security.
\subsection{Hardware architecture enhancements}
\label{sec:hardware_arch_enhance}
There are multiple hardware architecture enhancements concerning security in IoT devices.
The following section will go into detail about the Memory Protection Unit (MPU) and some forms of Protected Module Architectures (PMA).
\subsubsection{Memory Protection Unit}
\label{sec:mpu}
Recent low-power microcontrollers (MCUs), e.g. the 32-bit Cortex-M MCUs, include an MPU, in contrast to older MCUs.
The MPU is a programmable part of the MCU's hardware that allows privileged software, like the kernel of an Operating System (OS), to protect chosen regions of physical memory, providing memory isolation between applications, and between applications and the kernel.
In case of an access violation, e.g. during an instruction fetch or data access from the processor, the MPU triggers a fault exception, enabling e.g. an OS kernel to enforce the desired security properties.
\cite{mpu_about}
To detect access violations the MPU monitors all memory accesses.
It is possible to define memory regions in which code cannot be executed, by disallowing instruction fetches altogether, or in which data is read-only, to prevent memory corruption.
This makes it harder for an attacker to inject and execute malicious code in memory regions that have these properties enforced by the MPU.
\cite[par.~2]{mpu_whats_new}
% source: https://community.arm.com/arm-community-blogs/b/architectures-and-processors-blog/posts/what-s-new-with-the-memory-protection-unit-mpu-in-cortex-m23-and-cortex-m33
In contrast to MMUs, MPUs do not provide support for virtual memory or memory segmentation.
\cite[p.~236, par.~2]{tockos}
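The region-based check described above can be illustrated with a simplified software model. This is not real hardware behaviour and not any vendor's API; region layout, field names and the fault notion are illustrative assumptions:

```rust
// Simplified software model of MPU region checks: each region grants
// read/write/execute permissions for an address range; any access that no
// region permits corresponds to a fault exception on the real hardware.
#[derive(Clone, Copy)]
struct Region { base: u32, size: u32, write: bool, exec: bool }

#[derive(Clone, Copy)]
enum Access { Read, Write, Fetch }

fn mpu_check(regions: &[Region], addr: u32, access: Access) -> bool {
    regions.iter().any(|r| {
        let in_range = addr >= r.base && addr < r.base + r.size;
        in_range && match access {
            Access::Read => true,     // all mapped regions are readable here
            Access::Write => r.write, // write requires write permission
            Access::Fetch => r.exec,  // instruction fetch requires execute permission
        }
    })
}

fn main() {
    let regions = [
        Region { base: 0x0000, size: 0x1000, write: false, exec: true },  // code: read + execute
        Region { base: 0x2000, size: 0x0400, write: true,  exec: false }, // data: read + write
    ];
    assert!(mpu_check(&regions, 0x0100, Access::Fetch));  // fetching code: allowed
    assert!(!mpu_check(&regions, 0x2100, Access::Fetch)); // executing data: fault
    assert!(mpu_check(&regions, 0x2100, Access::Write));  // writing data: allowed
    assert!(!mpu_check(&regions, 0x0100, Access::Write)); // writing code: fault
    assert!(!mpu_check(&regions, 0x8000, Access::Read));  // unmapped address: fault
}
```

The "data is not executable, code is not writable" configuration shown in `main` is exactly the property that thwarts straightforward code injection.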
\subsubsection{Protected Module Architectures}
\label{sec:pma}
\paragraph{Self-Protecting Modules}
The concept of a Self-Protecting Module (SPM) has been proposed by \cite[p.~347, sect.~3]{b1} and describes "an area of memory with a particular layout
and with particular memory protection settings" \cite[p.~347, sect.~3.1, par.~1]{b1}. %FIXME use proper citing with \textcite
The SPM consists of three contiguous memory sections: a protected, private section %"SSecret"
that only trusted code is able to access directly, a read-only publicly accessible section %"SPublic"
which contains data and code of the module, and lastly a section %"SEntry"
that defines the entry points to the module's code, through which other modules or applications can interact with the module using the corresponding entry point addresses.
Based on this structure, certain access control rules can be defined:
for example, that the private section is only readable while the program counter (PC) is inside the module's code in the public section, or that the public section is not writeable from outside the module.
\cite[p.~347, sect.~3.1, par.~2,3]{b1}
This is called Program Counter Based Access Control (PCBAC).
\cite[p.~4, sect.~2.2, par.~2]{b2}
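The PCBAC rule for the private section can be stated as a tiny executable model. The section layout and names below are illustrative assumptions, not the concrete layout from the cited work:

```rust
// Toy model of Program Counter Based Access Control for one SPM: the private
// section is only readable while the program counter lies inside the module's
// public code section, and control may only enter the module at a declared
// entry point. Address ranges are (start, end), end exclusive.
struct Spm {
    public: (u32, u32),  // module code and public data
    private: (u32, u32), // module secrets
    entry: Vec<u32>,     // declared entry point addresses
}

// rule 1: the private section is readable only from the module's own code
fn may_read_private(spm: &Spm, pc: u32) -> bool {
    pc >= spm.public.0 && pc < spm.public.1
}

// rule 2: outside code may only jump to a declared entry point
fn may_enter_at(spm: &Spm, target: u32) -> bool {
    spm.entry.contains(&target)
}

fn main() {
    let spm = Spm { public: (0x1000, 0x2000), private: (0x2000, 0x2100), entry: vec![0x1000] };
    assert!(may_read_private(&spm, 0x1800));  // module's own code: access granted
    assert!(!may_read_private(&spm, 0x4000)); // external code: access denied
    assert!(may_enter_at(&spm, 0x1000));      // declared entry point
    assert!(!may_enter_at(&spm, 0x1a00));     // jump into the middle of the module: denied
}
```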
\paragraph{Trusted Execution Environment}
System software, which especially includes operating systems (OS), might comprise complex core components that the system must be able to trust for privileged operations.
The larger and more complex the system and its components become, the more likely it is that potential vulnerabilities in the code of privileged parts of the system remain undetected and could therefore be exploited by an attacker.
To restrict access to the trusted parts of the system from the untrusted parts, a hardware isolation mechanism called the Trusted Execution Environment (TEE) has been proposed.
The TEE provides an environment for safety-critical applications to run in, separated from the large attack surface that the untrusted rich OS provides. %TODO define rich OS
Thereby the confidentiality and integrity of the trusted computations inside the TEE should be guaranteed, in part by reducing the interface between the TEE and the rest of the system to a minimum.
\cite[p.~130:8, sect.~3.1, par.~1,2]{b12}
% What does Trustzone do about the confused deputy problem? How can it know which untrusted program should have which capabilities?
% TODO As example: Arm TrustZone-M or RISC-V MultiZone?
Many systems can be separated into trusted code, which should be safe to run since it is e.g. verified, and untrusted code, which may be supplied by a possibly malicious user. To guarantee that untrusted, potentially malicious code cannot directly use trusted parts or resources of the system, it may be necessary to isolate these components of different trust levels from each other.
Such isolation of the trusted parts of the system can be achieved by the TEE.
Examples of hardware that implements TEEs are "Arm TrustZone-M" and "RISC-V MultiZone".
Arm TrustZone is a hardware-based security mechanism designed, among others, for the Cortex-M MCUs \cite[p.~130:5, sect.~2.2, par.~1]{b12} and provides the processor with two strictly separated protection domains to operate in: the secure world and the normal world.
The processor may only ever operate in one of these domains at a time and therefore executes instructions in either a secure or a non-secure state.
\cite[p.~130:3, sect.~2.1, par.~2]{b12}
Non-secure software on a device with Arm TrustZone enabled cannot directly access secure resources.
Instead, non-secure software must use certain "secure function entry points" \cite[p.~130:5, sect.~2.2, par.~2]{b12} to communicate with secure software.
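The gate discipline can be sketched as a small model. This is not TrustZone's actual programming interface; the type, the entry point address and the returned value are purely illustrative:

```rust
// Illustrative model of the two-world split: non-secure code may only transfer
// control to the secure world at registered secure function entry points; any
// other secure-world address is unreachable from the normal world.
use std::collections::HashSet;

struct SecureWorld {
    entry_points: HashSet<u32>, // registered secure function entry points
    secret: u32,                // data only secure code may touch
}

impl SecureWorld {
    // a call from the normal world is only dispatched at a registered gate
    fn call_from_normal_world(&self, target: u32) -> Option<u32> {
        if self.entry_points.contains(&target) {
            Some(self.secret.wrapping_add(1)) // stand-in for a secure service
        } else {
            None // no valid gate: the world transition is blocked
        }
    }
}

fn main() {
    let sw = SecureWorld { entry_points: [0x1000].into_iter().collect(), secret: 41 };
    assert_eq!(sw.call_from_normal_world(0x1000), Some(42)); // via the gate: allowed
    assert_eq!(sw.call_from_normal_world(0x1004), None);     // arbitrary secure address: blocked
}
```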
\colorbox{yellow}{[TODO...]}
%\subsubsection*{Attack vectors}
Even though an attacker might not be able to directly access anything inside the TEE, the TEE implementations might still be susceptible to attacks based on the confused deputy problem.
While any untrusted program can use the interface to communicate with the trusted part of the system, it is not explained how the capabilities of different programs to use this trusted interface are managed. %TODO See if that is actually the case!!
This could be exploited, since the examined implementations ("Arm TrustZone-M" and "RISC-V MultiZone") do not cover access control mechanisms, such as capability-based access control, that would be able to prevent such attacks.
\subsection{Usage of type-safe languages}
\label{sec:safelang}
Instead of achieving process isolation using virtual address spaces, another approach consists of writing the kernel in a memory-safe programming language while omitting the otherwise needed hardware protection.
But the high-level languages that can be used in this approach might have some drawbacks, as in the case of the Spin or Singularity OS kernels, %TODO Citation needed?
which are written in garbage-collected languages. The garbage collector makes the memory placement and layout in the kernel more complicated and can also cause the system to halt periodically to process accumulated garbage.
Such an eventual halt of the whole OS can be problematic if a real-time system is required to behave deterministically concerning timings.
\cite[p.~1, sect.~1, par.~1,2]{b9}
Therefore the Rust programming language has been proposed as an alternative programming language for system software because it does not rely on a garbage collector for memory management.
Instead Rust uses lifetime- and scope-tracking to decide when to deallocate values.
\cite[p.~1, sect.~1, par.~3]{b9}
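Scope-tracked deallocation can be demonstrated in a few lines. The `Buffer` type and the log are illustrative; the point is only that the destructor runs at a statically known program point, with no garbage-collector pause:

```rust
// Minimal sketch of Rust's scope-tracked deallocation: a value's destructor
// runs deterministically when it goes out of scope; the log records exactly
// when the deallocation happened relative to the surrounding code.
use std::cell::RefCell;

struct Buffer<'a> { log: &'a RefCell<Vec<&'static str>> }

impl Drop for Buffer<'_> {
    fn drop(&mut self) { self.log.borrow_mut().push("buffer freed"); }
}

fn main() {
    let log = RefCell::new(Vec::new());
    {
        let _buf = Buffer { log: &log };
        log.borrow_mut().push("buffer in use");
    } // `_buf` leaves scope: freed here, at a statically known point
    log.borrow_mut().push("after scope");
    assert_eq!(*log.borrow(), vec!["buffer in use", "buffer freed", "after scope"]);
}
```

A garbage-collected language gives no such guarantee about when (or in a real-time window, whether) the reclamation runs.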
Even though most of the kernel's code can be written using safe language features, there are some parts of the kernel that require the usage of unsafe code.
The first category of unsafe code lies in the included standard library of the programming language, which is written by the language developers and consists of e.g. bounds checks and type casts.
The second category comprises the unsafe parts in the kernel itself, which include for example the context switches, system call traps and the interrupt handling.
The goal of any safety-concerned OS in a memory-safe language is to reduce these unsafe parts to a minimum.
\cite[p.~1,2, sect.~1, par.~4]{b9}
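The idea of confining the unavoidable unsafe parts can be sketched as follows. The register type is an illustrative assumption (a plain variable stands in for a memory-mapped hardware register), not code from any real kernel:

```rust
// Sketch of confining unsafety: the raw volatile accesses (the kind of code a
// kernel needs for memory-mapped registers) are wrapped once in a small safe
// API, so the rest of the code never touches `unsafe` directly.
struct Reg(*mut u32);

impl Reg {
    fn write(&self, v: u32) {
        // SAFETY: the pointer is valid for the whole lifetime of this example
        unsafe { std::ptr::write_volatile(self.0, v) }
    }
    fn read(&self) -> u32 {
        // SAFETY: same pointer validity argument as above
        unsafe { std::ptr::read_volatile(self.0) }
    }
}

fn main() {
    let mut backing: u32 = 0; // stand-in for a memory-mapped register
    let reg = Reg(&mut backing as *mut u32);
    reg.write(0xBEEF);
    assert_eq!(reg.read(), 0xBEEF); // safe callers never see the raw pointer
}
```

Auditing the kernel for memory safety then reduces to auditing these few small `unsafe` blocks.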
\colorbox{yellow}{[TODO...]}
"While type safety is a good property to have, it is not very strong. The kernel may still misbehave or attempt, for instance, a null pointer access." \cite[p.~218, par.~10]{b10}
\subsection{Formal verification}
\label{sec:verify}
There exist different approaches to formally verify software. For examining this topic, this report focuses on two examples: the verification of the seL4 microkernel and the usage of TLA+ (extended "Temporal Logic of Actions") specifications.
Although formal verification can help to enforce rigorous security and safety properties, its actual value depends on the accuracy and correctness of the modelling process: the system that is verified must be abstractly modelled so that it can be checked against the specification. If the modelling process or parts of the model are erroneous, the guarantees may not hold. For the seL4 microkernel this concerns especially the assumptions about the correctness of the compiler, the assembly code and the hardware.
Assuming that these assumptions hold, it is possible to guarantee that the seL4 microkernel will never crash and will never perform unsafe operations, which makes the behaviour of the kernel predictable in every possible situation. \cite[p.~207, par.~2]{b10}
\begin{figure}[htbp]
\centerline{\includegraphics[width=\linewidth]{pictures/seL4.png}}
\caption{Abstract overview of "The refinement layers in the verification of seL4" \cite[p.~209, fig.~2]{b10}}
\label{fig:sel4}
\end{figure}
In seL4 the process of formal verification (see fig.~\ref{fig:sel4}) starts with the abstract specification of the operational model which describes the behaviour of the complete system. This includes e.g. the set of available system-calls and their abstract effects or the behaviour in case of an interrupt or fault situation but it does not include any implementation details.
\cite[p.~209, sect.~2.3, par.~3]{b10}
According to the abstract specification, a prototype of the kernel is written in a subset of the programming language Haskell. This prototype is used to directly generate a formal executable specification. This is possible by confining oneself to the mentioned subset of Haskell, which can be automatically translated to the language of the used theorem prover ("Isabelle/HOL" \cite[p.~209, sect.~2.3, par.~1]{b10}). \cite[p.~209, sect.~2.2, par.~2]{b10}
The Haskell prototype is therefore "an executable model and implementation of the final design" \cite[p.~209, ch.~2.2, par.~3]{b10}, but the final production kernel is a manual re-implementation of that prototype in the C programming language. Using a C implementation instead of the Haskell prototype avoids having to verify the significantly large codebase of the Haskell runtime, avoids the overhead induced by the Haskell runtime's garbage collector, and finally makes low-level performance optimizations possible. \cite[p.~209, sect.~2.2, par.~3]{b10}
The prototype itself is not checked against any specification, only the resulting executable specification is, which makes the translation from prototype to executable specification a non-critical part concerning correctness.
In contrast to the abstract specification, the executable specification must include all data structures and implementation details that the final C implementation does.
\cite[p.~209, sect.~2.3, par.~6]{b10}
Finally, the C implementation of seL4 must be verified against the executable specification.
% TODO Should i go in that deep or am i wasting my space?
% TODO Should i remove mentioning TLA+ to not write multiple pages just about formal verification?
\cite[p.~209, sect.~2.3, par.~7]{b10}
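The relation between an executable specification and an independent implementation can be illustrated on a deliberately tiny state space. This is only an analogy: seL4's refinement is a machine-checked proof in Isabelle/HOL, not the exhaustive testing shown here, and the "grow heap" operation and its bound are invented for the illustration:

```rust
// Toy illustration of checking an implementation against an executable
// specification: the spec says what a bounded "grow heap" operation must
// return; the independently written implementation is compared against it
// over the whole (small) state space.
const HEAP_MAX: u32 = 8;

// executable specification: growing succeeds iff the result stays in bounds
fn spec_grow(current: u32, extra: u32) -> Option<u32> {
    let next = current.checked_add(extra)?;
    if next <= HEAP_MAX { Some(next) } else { None }
}

// concrete implementation, written independently of the spec
fn impl_grow(current: u32, extra: u32) -> Option<u32> {
    match current.checked_add(extra) {
        Some(n) if n <= HEAP_MAX => Some(n),
        _ => None, // request denied, state unchanged
    }
}

fn main() {
    // exhaustive check: every reachable state agrees with the specification
    for current in 0..=HEAP_MAX {
        for extra in 0..=HEAP_MAX {
            assert_eq!(impl_grow(current, extra), spec_grow(current, extra));
        }
    }
}
```

The proof replaces the exhaustive loop: it establishes the same agreement for *all* states, including ones no test could enumerate.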
\subsubsection*{Attack vectors}
An attacker could search for a weak point in the assumptions that the abstraction or the model of the verification relies on.
These are the parts whose correctness is presupposed: the compiler, the assembly code and the hardware.
If an attacker can find errors in any of those assumed-correct parts, it might be possible to cause malicious effects like memory corruption or crashes that lie outside of the safe behaviour that the formal verification should allegedly guarantee.
\colorbox{yellow}{[TODO...]}
\subsection{Virtualization/Containerization}
Memory safety can also be achieved by using virtualization.
A virtual machine (VM) that runs on the hardware can run programs in languages of higher abstraction levels than e.g. the C language.
That could make it possible to provide safety as well as portability properties. % TODO Remove any mentions of other properties than safety/security since it does not matter in this context?
The problem with virtual machines on low-power hardware like microcontrollers is their overhead in memory and processing power, since often the performance that low-level languages like C provide is necessary to run the programs fast enough, so that e.g. the performance requirements of real-time systems are met. %TODO Citation needed?
An example of such a VM is WebAssembly (Wasm), which is designed to isolate different processes in web browsers. \cite[p.~3, sect.~4.1, par.~2]{b13}
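The isolation idea behind such VMs can be sketched with a toy guest memory in the spirit of Wasm's linear memory (the types and the trap are illustrative, not Wasm's actual semantics or API):

```rust
// Toy sketch of VM-style isolation: guest "addresses" are indices into a
// private, bounds-checked buffer, so guest code can never reach host memory;
// an out-of-range access traps instead of corrupting anything.
struct GuestMemory { bytes: Vec<u8> }

#[derive(Debug, PartialEq)]
enum Trap { OutOfBounds }

impl GuestMemory {
    fn store(&mut self, addr: usize, v: u8) -> Result<(), Trap> {
        match self.bytes.get_mut(addr) {
            Some(slot) => { *slot = v; Ok(()) }
            None => Err(Trap::OutOfBounds), // sandbox boundary enforced by the VM
        }
    }
}

fn main() {
    let mut mem = GuestMemory { bytes: vec![0; 64] };
    assert_eq!(mem.store(63, 7), Ok(()));                 // inside guest memory
    assert_eq!(mem.store(64, 7), Err(Trap::OutOfBounds)); // trapped, not executed
}
```

The bounds check on every guest access is exactly the overhead the preceding paragraph refers to.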
\colorbox{yellow}{[TODO...]}
\section{Tock OS}
\label{sec:tock}
Tock OS aims to combine the usage of type-safe languages (see sect.~\ref{sec:safelang}) and of hardware architecture enhancements (see sect.~\ref{sec:hardware_arch_enhance}), in the form of the MPU (see sect.~\ref{sec:mpu}), to achieve its stated objectives:
dependability, concurrency, efficiency and fault isolation.
\cite[p.~236, sect.~2.2]{tockos}
Since the focus of this report lies on the security properties of OSs for low-power systems, the following examinations mostly concern the objective of fault isolation.
The other objectives are only mentioned in so far as they contribute to the security properties in some way.
In an OS that consists of multiple components and may even support multiple applications, memory faults may happen in one component or application which may in turn corrupt the state of other components or applications.
Let an actor be either a component or application on the examined system, for conciseness purposes.
Due to the possible memory corruption between the different actors, an isolation between them is desirable, so that an actor can fail and the failure be properly handled by the OS kernel without impairing other actors.
Fault isolation on MCUs is typically not as easy as it is on general-purpose computers mostly due to hardware constraints, as described in section~\ref{intro}.
The following part of this section covers how Tock OS aims to achieve the fault isolation and other security properties "by leveraging recent advances in microcontrollers and programming languages".
\cite[p.~237, sect.~2.2, par.~5]{tockos}
\subsection{Architecture}
\begin{figure}[htbp]
\centerline{\includegraphics[width=\linewidth]{pictures/tock_arch.png}}
\caption{"Tock system architecture." Processes can use system-calls (purple lines) to communicate with the kernel. The kernel consists of trusted and untrusted capsules (blue) which communicate with each other and with the hardware according to individual access permissions. \cite[p.~237, fig.~2]{tockos}}
\label{fig:tock_arch}
\end{figure}
Tock OS has an architecture (see fig.~\ref{fig:tock_arch}) that divides its software into two classes: capsules and processes.
Capsules are software modules that are part of the kernel; they are secured at compile time by the constraints that the programming language enforces (see sect.~\ref{sec:safelang}) and therefore cannot be loaded at runtime \cite[p.~238, sect.~3.2, par.~3]{tockos}.
Processes, in contrast, can be loaded at runtime and are analogous to the prevalent concept of processes in other OSs: they are isolated from each other with hardware support using the MPU (see sect.~\ref{sec:mpu}) and can only interact with the kernel using system calls.
Capsules are scheduled cooperatively and therefore cannot interrupt each other, while processes are scheduled preemptively and therefore can be interrupted, causing a context switch if required.
\cite[p.~237, sect.~3]{tockos}
\subsubsection{Capsules}
\label{sec:capsule}
%TODO Add that capsules share a stack
To be able to enforce certain constraints in the capsules, as components of the kernel, a capsule must be written in a memory-safe and type-safe language to prevent issues prevalent in unsafe, low-level languages like C, such as buffer overflows and use-after-free.
Since capsules are scheduled cooperatively, they must be trusted not to limit the capabilities of other capsules concerning resource and timing requirements. A malicious capsule could for example starve other capsules of processing time, because capsules cannot interrupt each other.
\cite[p.~238, sect.~3.2, par.~1,2]{tockos}
However, the Tock threat model \cite[p.~237, sect.~3.1]{tockos} assumes that capsules, for example drivers for third-party hardware components supplied by the vendors, will be audited before being compiled into the kernel.
In the case that bugs or malicious parts of capsules are not identified by the audit, Tock still aims to guarantee the confidentiality and integrity of the other capsules, even if availability may not be guaranteed.
Therefore a malicious or erroneous capsule might block resources or restart the system, but it cannot acquire access permissions for memory or hardware that it is not supposed to have access to.
\cite[p.~237, sect.~3.1, par.~3]{tockos}
But different capsules in Tock OS may require different levels of trust: core kernel capsules, e.g. the process scheduler, must be able to directly manipulate CPU registers, which requires a high level of trust, while on the other hand some peripheral drivers, which may only use hardware-agnostic interfaces to communicate with the peripherals, can be treated as fully untrusted without limiting functionality.
The capsules that require a high level of trust, by being able to perform actions outside the Rust type system through directly interacting with the hardware, %like casting memory mapped registers to type-safe structs
are limited to a small number.
By limiting the number of trusted capsules and building other, more complex capsules on the abstractions provided by those few trusted capsules, Tock OS aims to maintain its confidentiality and integrity goals.
\cite[p.~238, sect.~3.2.1]{tockos}
%TODO 3.2.2 Capsule Isolation? \cite[p.~238, sect.~3.2.2]{tockos}
\subsubsection{Processes}
\label{sec:process}
In contrast to capsules, which are isolated due to the memory- and type-safety properties of the Rust language, processes in Tock OS are hardware-isolated.
Every process has its own independent memory region with a separate stack, heap and static variables, which enables preemptive scheduling, making long-running computations interruptable and possibly providing a fair distribution of computing time between processes, if desired (Tock OS uses a round-robin scheduler \cite[p.~239, sect.~3.3, par.~1]{tockos}).
Even though the MPU can provide memory isolation, it does not enable virtual memory, so processes are not provided with the illusion of infinite memory, and all addresses used in the system are absolute, physical ones (see sect.~\ref{sec:mpu}).
Also, no code in the form of shared libraries is shared between different processes.
Due to the hardware isolation, the programming language used in the processes is not limited to Rust but can also be C or another language, without compromising the stated safety properties (see sect.~\ref{sec:tock}).
Processes interact with the kernel using a system-call interface and between each other using an Inter Process Communication (IPC) implementation.
\cite[p.~239, sect.~3.3, par.~1,2]{tockos}
In the current state, low-latency direct hardware access for processes is considered for performance reasons, but not implemented due to possible side-channel memory access through DMA registers, which could violate confidentiality as well as integrity goals.
\cite[p.~239, sect.~3.3, par.~3]{tockos}
\subsubsection{Kernel interface}
The interface for applications to communicate with the kernel in Tock OS consists of five system calls: \lstil{command}, \lstil{allow}, \lstil{subscribe}, \lstil{memop} and \lstil{yield} \cite[p.~240, tab.~3]{tockos}.
It is geared to event-driven systems and therefore any system call is non-blocking \cite[p.~239, sect.~3.3, par.~2]{tockos}, in contrast to other system call interfaces, such as in Linux.
Using the \lstil{command} call, processes can make requests to capsules with integer arguments, e.g. to configure timers.
The \lstil{allow} call allows a process to pass more complex data to a capsule using a data buffer, which is verified by the kernel by checking whether the specified pointer and length of the buffer lie within the process's memory. %TODO analogous to Self-Protecting Modules?
The \lstil{subscribe} call can be used by a process to create a callback for a request to a specific capsule, for example a notification for a network packet arrival.
The callback will be invoked by the capsule as soon as the packet has arrived, and the corresponding process can then \lstil{yield}, which blocks the process until the callback has been processed.
Finally, \lstil{memop} can be used to increase the heap size of a process.
\cite[p.~239,240, sect.~3.4]{tockos}
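The event-driven shape of this interface can be sketched as a toy dispatcher. This is explicitly *not* Tock's actual API or semantics, only an illustration of "every call returns immediately, and callbacks are delivered when the process yields"; all names and the counters are invented:

```rust
// Illustrative model of a five-call interface: every call returns immediately;
// Yield is the only point at which queued callbacks are delivered to the process.
enum Syscall { Command(u32), Allow(usize, usize), Subscribe(u32), Memop(usize), Yield }

#[derive(Default)]
struct Process { heap: usize, pending_callbacks: u32, delivered: u32 }

fn dispatch(p: &mut Process, call: Syscall) {
    match call {
        Syscall::Command(_arg) => p.pending_callbacks += 1, // request queues a completion event
        Syscall::Allow(_ptr, _len) => {}                    // share a buffer with a capsule
        Syscall::Subscribe(_id) => {}                       // register a callback
        Syscall::Memop(extra) => p.heap += extra,           // grow the process heap
        Syscall::Yield => {                                 // wait until callbacks run
            p.delivered += p.pending_callbacks;
            p.pending_callbacks = 0;
        }
    }
}

fn main() {
    let mut p = Process::default();
    dispatch(&mut p, Syscall::Subscribe(1));
    dispatch(&mut p, Syscall::Command(0));
    dispatch(&mut p, Syscall::Memop(128));
    assert_eq!(p.delivered, 0);              // nothing delivered before yielding
    dispatch(&mut p, Syscall::Yield);
    assert_eq!((p.delivered, p.heap), (1, 128));
}
```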
\subsection{Grants}
\label{sec:grants}
To be able to, for example, answer a request from a process, a capsule might need to dynamically allocate memory for the response.
Tock uses so-called grants to solve this problem. A grant is a granted separate section of memory, including an API to access it, in the memory space of each process, which the kernel can use as a heap for operations concerning the corresponding process.
Since the kernel heap is thereby distributed over the memory of the processes, grant allocations do not interfere with each other, as they do not share a memory region.
If a process fails, the resources of its grants can be freed easily, since they are bound to the process instead of the kernel.
A capsule can create a grant as being of any arbitrary type or data structure.
Grants do not initially allocate memory: a grant is only conceptual until a process actually makes use of it by calling a certain method on the grant (\lstil{enter}).
To preserve safety when using grants, the kernel enforces that memory allocated by capsules does not break the type system, that grants can only be used as long as the corresponding process is still alive and, finally, that the grant memory can be freed when the corresponding process terminates.
\cite[p.~240, sect.~4]{tockos}
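The grant lifecycle described above (lazy allocation on first \lstil{enter}, usable only while the owning process lives, reclaimable on its death) can be condensed into a simplified model. The names and types are illustrative, not Tock's real grant API:

```rust
// Simplified model of grant semantics: a grant is lazily allocated on first
// `enter`, lives in the owning process's memory, and becomes unusable and
// reclaimable as soon as that process dies.
struct Grant { data: Option<u32> } // `None` until first use: purely conceptual

struct Process { alive: bool, grant: Grant }

fn enter(p: &mut Process) -> Option<&mut u32> {
    if !p.alive {
        return None;                    // dead process: the grant may not be used
    }
    Some(p.grant.data.get_or_insert(0)) // first use allocates, later uses reuse it
}

fn main() {
    let mut p = Process { alive: true, grant: Grant { data: None } };
    *enter(&mut p).unwrap() += 5;       // lazy allocation happens here
    *enter(&mut p).unwrap() += 1;
    assert_eq!(p.grant.data, Some(6));
    p.alive = false;                    // process terminates
    assert!(enter(&mut p).is_none());   // capsules can no longer touch the grant
    p.grant.data = None;                // kernel reclaims the grant memory
}
```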
Tock makes further use of the MPU to enforce that a process cannot directly access the grant memory in its memory region; otherwise the process could directly read or write kernel data structures and violate the guarantees that the Rust language provides for the kernel.
\cite[p.~240, sect.~4.1]{tockos}
To enforce that a capsule cannot access the grant memory of dead processes, Tock ensures that the kernel never runs in parallel with the processes and additionally always checks the validity of the process identifier that the grant memory to be accessed belongs to.
\cite[p.~240, sect.~4.2]{tockos}
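The grant mechanism described above can be sketched with a small toy model: per-process data that is allocated lazily on first use and reclaimed when the owning process dies. The names below are invented for this sketch and do not match Tock's real \lstil{Grant} API (which additionally involves unsafe memory management inside the process's grant region):

```rust
// Toy model of a grant: per-process kernel data, allocated lazily on the
// first `enter` and reclaimed when the owning process dies. Names are
// invented for this sketch; this is not Tock's real `Grant` API.
use std::collections::HashMap;

struct Grant<T: Default> {
    per_process: HashMap<u32, T>, // process identifier -> grant data
}

impl<T: Default> Grant<T> {
    fn new() -> Self {
        Grant { per_process: HashMap::new() }
    }

    // Models `enter`: allocate on first use, then give the capsule
    // scoped access to the data of exactly this process.
    fn enter<R>(&mut self, pid: u32, f: impl FnOnce(&mut T) -> R) -> R {
        f(self.per_process.entry(pid).or_default())
    }

    // Called by the kernel when the process terminates, so no capsule
    // can ever observe grant memory of a dead process.
    fn free(&mut self, pid: u32) {
        self.per_process.remove(&pid);
    }
}
```

The closure passed to \lstil{enter} is the scoped-access idea from the paper: the capsule never holds a raw reference into the grant region beyond the duration of the call, which is what allows the kernel to reclaim the memory safely when the process dies.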
\subsection{MPU usage in Tock}
As previously mentioned, Tock makes use of the MPU (of the ARM Cortex-M) to implement kernel and process isolation.
Concerning processes, the MPU is configured to allow every process to access only its own code segment in the flash memory and its data, stack and heap in the RAM.
The process itself is explicitly disallowed from accessing the grant region in its memory space, as mentioned in section~\ref{sec:grants}.
To implement IPC, the MPU can enable the sharing of memory blocks between different processes.
In contrast, the MPU is disabled as long as the kernel is active, since the security properties of the kernel and its capsules are enforced by the Rust type system and the capsules share a stack and memory region (see sect.~\ref{sec:capsule}).
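The per-process MPU configuration just described can be summarized as a small region-check model. The region layout and permission names below are illustrative only and do not reproduce Tock's actual MPU tables or the Cortex-M register interface:

```rust
// Simplified model of a per-process MPU configuration. Region layout and
// permission names are illustrative, not Tock's actual MPU tables.
#[derive(Clone, Copy, PartialEq)]
enum Access {
    None,      // e.g. the grant region: present but inaccessible
    ReadWrite, // data, stack and heap in RAM
    ReadExec,  // code segment in flash
}

struct Region {
    start: usize,
    len: usize,
    access: Access,
}

// A process may touch an address only if one of its regions covers it
// with a permission other than `None`.
fn process_may_access(regions: &[Region], addr: usize) -> bool {
    regions
        .iter()
        .any(|r| addr >= r.start && addr < r.start + r.len && r.access != Access::None)
}
```

Mapping the grant region with \lstil{Access::None} rather than leaving it unmapped mirrors the idea that the memory belongs to the process's address space (so the kernel can account it to that process) while remaining inaccessible to the process itself.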
%\subsection{Evaluation} %TODO of Tock OS against what? by which measure? Maybe in comparison to the other approaches? How they could be used to enhance the security properties even more?
\section{Evaluation}
The following part is an evaluation of Tock OS by the author, considering the other mentioned approaches: \colorbox{yellow}{[Comment:} The following part is just an evaluation idea; it might be thrown away if it is deemed superfluous, but I thought I would add a bit more of a personal contribution, even though it is not explicitly part of the main question of the paper]
To answer the question if Tock OS could benefit from the other mentioned approaches in section~\ref{sec:approaches}, it is necessary to first evaluate which of the approaches Tock already uses.
Concerning hardware architecture enhancements (see sect.~\ref{sec:hardware_arch_enhance}), Tock already uses the MPU, although using one of the PMA approaches (see sect.~\ref{sec:pma}) might benefit the security properties of especially the trusted capsules in the kernel (see sect.~\ref{sec:capsule}), since the MPU might not be as powerful an isolation mechanism as, for example, a TEE.
Since Tock already uses Rust for the kernel, the usage of a type-safe language (see sect.~\ref{sec:safelang}) is given.
Formal verification (see sect.~\ref{sec:verify}) on the other hand is currently not applied in any form.
This might not be especially beneficial for the implementation of the processes or the trusted modules, since these are secured using hardware and the features of the Rust language, assuming that the implementation of the hardware and the Rust compiler is correct.
But since the kernel's core and the trusted capsules have to use some unsafe code to function, especially when directly manipulating the hardware, formal verification of these trusted and partly unsafe parts of the OS could be used to identify critical bugs and design flaws.
%TODO What about virtualization?
\section{Prospects}
%TODO Search for other upcoming hardware enhancements etc. otherwise remove the section!
\colorbox{yellow}{[TODO?]}
\begin{thebibliography}{00}
%\subsection*{\upshape Given Sources:}
\bibitem{b1} Strackx, Raoul \& Piessens, Frank \& Preneel, Bart. (2010). Efficient Isolation of Trusted Subsystems in Embedded Systems. Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering. 50. 344-361. 10.1007/978-3-642-16161-2\_20.
\bibitem{b2} Mühlberg, J.T., Noorman, J., Piessens, F. (2015). Lightweight and Flexible Trust Assessment Modules for the Internet of Things. In: Pernul, G., Y A Ryan, P., Weippl, E. (eds) Computer Security -- ESORICS 2015. ESORICS 2015. Lecture Notes in Computer Science(), vol 9326. Springer, Cham. https://doi.org/10.1007/978-3-319-24174-6\_26
\bibitem{b3} E. Baccelli et al., "RIOT: An Open Source Operating System for Low-End Embedded Devices in the IoT," in IEEE Internet of Things Journal, vol. 5, no. 6, pp. 4428-4440, Dec. 2018, doi: 10.1109/JIOT.2018.2815038.
\bibitem{tockos} Levy, Amit \& Campbell, Bradford \& Ghena, Branden \& Giffin, Daniel \& Pannuto, Pat \& Dutta, Prabal \& Levis, Philip. (2017). Multiprogramming a 64kB Computer Safely and Efficiently. 234-251. 10.1145/3132747.3132786.
%\subsection*{\upshape Other sources:}
\bibitem{b5} Erlingsson, Úlfar. (2007). Low-Level Software Security: Attacks and Defenses. 92-134. 10.1007/978-3-540-74810-6\_4.
\bibitem{b6} Clercq, Ruan \& Piessens, Frank \& Schellekens, Dries \& Verbauwhede, Ingrid. (2014). Secure interrupts on low-end microcontrollers. 147-152. 10.1109/ASAP.2014.6868649.
\bibitem{b7} A. Dunkels, B. Gronvall and T. Voigt, "Contiki - a lightweight and flexible operating system for tiny networked sensors," 29th Annual IEEE International Conference on Local Computer Networks, 2004, pp. 455-462, doi: 10.1109/LCN.2004.38.
\bibitem{b8} Banegas, Gustavo \& Zandberg, Koen \& Herrmann, Adrian \& Baccelli, Emmanuel \& Smith, Benjamin. (2021). Quantum-Resistant Security for Software Updates on Low-power Networked Embedded Devices.
% Safe languages
\bibitem{b9} Amit Levy, Bradford Campbell, Branden Ghena, Pat Pannuto, Prabal Dutta, and Philip Levis. 2017. The Case for Writing a Kernel in Rust. In Proceedings of the 8th Asia-Pacific Workshop on Systems (APSys '17). Association for Computing Machinery, New York, NY, USA, Article 1, 1–7. https://doi.org/10.1145/3124680.3124717
% Formal verification
\bibitem{b13} Rysavy, Ondrej \& Rab, Jaroslav. (2008). A component-based approach to verification of embedded control systems using TLA. Proceedings of the International Multiconference on Computer Science and Information Technology, IMCSIT 2008. 3. 719-725. 10.1109/IMCSIT.2008.4747321.
\bibitem{b10} Gerwin Klein, June Andronick, Kevin Elphinstone, Gernot Heiser, David Cock, Philip Derrin, Dhammika Elkaduwe, Kai Engelhardt, Rafal Kolanski, Michael Norrish, Thomas Sewell, Harvey Tuch, and Simon Winwood. 2010. SeL4: formal verification of an operating-system kernel. Commun. ACM 53, 6 (June 2010), 107–115. https://doi.org/10.1145/1743546.1743574
% Virtualization/Containerization
\bibitem{b11} Robbert Gurdeep Singh and Christophe Scholliers. 2019. WARDuino: a dynamic WebAssembly virtual machine for programming microcontrollers. In Proceedings of the 16th ACM SIGPLAN International Conference on Managed Programming Languages and Runtimes (MPLR 2019). Association for Computing Machinery, New York, NY, USA, 27–36. https://doi.org/10.1145/3357390.3361029
\bibitem{b14} Zandberg, Koen \& Baccelli, Emmanuel. (2021). Femto-Containers: DevOps on Microcontrollers with Lightweight Virtualization \& Isolation for IoT Software Modules.
% Hardware architecture enhancements
\bibitem{b12} Pinto, Sandro \& Santos, Nuno. (2019). Demystifying Arm TrustZone: A Comprehensive Survey. ACM Computing Surveys. 51. 1-36. 10.1145/3291047.
\bibitem{mpu_about} Arm Limited (n.~d.). About the MPU. In Memory Protection Unit (MPU) Version 1.0. URL: \url{https://developer.arm.com/documentation/100699/0100/Introduction/About-the-MPU} (visited on 07-07-2022)
\bibitem{mpu_whats_new} Tim Menasveta (2016). What’s new with the Memory Protection Unit (MPU) in Cortex-M23 and Cortex-M33?. URL: \url{https://community.arm.com/arm-community-blogs/b/architectures-and-processors-blog/posts/what-s-new-with-the-memory-protection-unit-mpu-in-cortex-m23-and-cortex-m33} (visited on 07-07-2022)
\end{thebibliography}
\listoffigures
\end{document}