diff --git a/report/iot_sec_and_priv_report_korthals.tex b/report/iot_sec_and_priv_report_korthals.tex
index 86a59e99532021fd6d00432e98bcc063b1c5d573..5adc6f352dc2183448db7def202e2d8b7ae2d094 100644
--- a/report/iot_sec_and_priv_report_korthals.tex
+++ b/report/iot_sec_and_priv_report_korthals.tex
@@ -34,12 +34,12 @@ The goal of this report is to answer how low-power IoT system software can ensur
 These security properties concern for example the isolation between different processes or the authenticity of a program's control-flow.
 Memory protection in particular may be more difficult in IoT systems since the hardware often does not provide a Memory Management Unit (MMU) and therefore hardware-assisted virtual addressing is not possible.
 
-To answer the question this report provides an non-exhausting overview of different approaches and techniques which aim to achieve the stated goal and are either already actively used or still a promising subject of research.
-Different hardware achitecture enhancements are introduced, including the Memory Protection Unit (MPU) and Protected Module Architectures.
-The question how the usage of type-safe programming languages unlike C/C++ might help to increase security is discussed briefly.
-It is considered how formal verification is used in security focused system software like seL4.
-Also the concept and properties of Control-Flow Integrity (CFI) is briefly introduced, followed by the benefits that virtualization might have for security purposes.
-Finally a case study on the subject of Tock OS is conducted which uses a novel approach combining the usage of the type-safe programming language rust, the MPU and a security focused architecture to provide heightened security guarantees compared to traditional system software in the low-power IoT domain.
+To answer this question the report provides a non-exhaustive overview of different approaches and techniques which aim to achieve the stated goal and are either already in active use or still a promising subject of research.
+Different hardware architecture enhancements are introduced, including the Memory Protection Unit (MPU) and Protected Module Architectures (PMA).
+The question of how memory- and type-safe programming languages, in contrast to C/C++, might help to increase security is discussed briefly.
+It is considered how formal verification is used in security-focused system software like the seL4 microkernel.
+The concept and properties of Control-Flow Integrity (CFI) are also briefly introduced, followed by the benefits that virtualization might have for security purposes.
+Finally a case study of Tock OS is conducted, which uses a novel approach combining the memory- and type-safe programming language Rust, the MPU and a security-focused architecture to provide heightened security guarantees compared to traditional system software in the low-power IoT domain.
 \end{abstract}
 
 \begin{IEEEkeywords}
@@ -54,68 +54,68 @@ Internet of Things (IoT), operating systems, systems security, real-time systems
 Many often small and low-powered systems constitute the so-called Internet of Things (IoT).
 Due to the widespread usage of such microcontroller-powered devices in many different and often security-sensitive scenarios, securing the system software of such devices has become relevant.
 
-Most general-purpose system software assumes a plethora of different ressources from the hardware it is running on.
-A fast processor with cryptographic extensions (e.g. AES \cite[p.~1]{aes_extension}), relatively large amounts of RAM to use for the operating system (OS) and the applications it serves and a Memory Management Unit (MMU) which achieves memory protection and thereby process isolation by implementing virtual address spaces and memory segmenation.
-In contrast low-power microcontrollers (MCUs) are much more limited concerning it's hardware and power ressources and do not offer an MMU and often not even more than 64~kB of RAM.
+Most general-purpose system software assumes a plethora of different resources from the hardware it is running on.
+These include a fast processor with cryptographic extensions (e.~g. AES \cite[p.~1]{aes_extension}), relatively large amounts of RAM to use for the operating system (OS) and the applications it serves and a Memory Management Unit (MMU) which achieves memory protection and thereby process isolation by implementing virtual address spaces and memory segmentation.
+In contrast, low-power microcontrollers (MCUs) are much more limited concerning their hardware and power resources and do not offer an MMU and often not even more than 64~kB of RAM.
 \cite[p.~234,235, sect.~1, par.~2]{tockos}
-This circumstances make the development of secure system software for low-power IoT devices much more challenging than for general-purpose computers.
+These circumstances make the development of secure system software for low-power IoT devices much more challenging than for general-purpose computers.
 
-Due to the missing virtual address spaces and memory segmenation low-power IoT systems are often constrainted operate with the applications and the OS in the same memory region which makes sharing pointers easy and efficient but also carries substantial security risks.
+Due to the missing memory virtualization and segmentation low-power IoT systems are often constrained to operate with the applications and the OS in the same memory region which makes sharing pointers easy and efficient but also carries substantial security risks especially concerning confidentiality and integrity.
 The missing memory isolation requires all code in an unrestricted multiprogramming environment to be trusted to not be malicious or misbehave.
 At the same time, IoT devices are often required to run fault-free for long periods of time which often requires static memory allocation for most tasks to avoid memory exhaustion errors.
-In such a ressource constrained system, static memory allocation makes it even more important to find the right balance for an applications memory usage such as when deciding how many concurrent requests a service can process since it can not dynamically changed at runtime.
+In such a resource-constrained system, static memory allocation makes it even more important to find the right balance for an application's memory usage, such as when deciding how many concurrent requests a service can process, since it cannot be dynamically changed at runtime.
 \cite[p.~234,235, sect.~1, par.~4-7]{tockos}
 
 The goal of this report is to answer how low-power IoT system software can ensure security properties (confidentiality, integrity, availability) in consideration of the above-mentioned constraints in memory space, processing power, power usage and hardware components.
-This goal is aimed to be achieved by first non-exhaustively describing a couple of general attack vectors on IoT system software which motivates the following overview about different approaches to IoT system software security which are not necessarily mutually exclusive.
+To achieve this goal, the report first briefly describes a couple of general attack vectors on IoT system software, which motivates the following overview of different approaches to IoT system software security which are not necessarily mutually exclusive.
 Subsequently, Tock OS is examined more closely as a concrete implementation of some of the preceding approaches.
 
-For simplicity and conciseness purposes any system software will be called an operating system and the privileged part of a supposed system software will be called kernel hereafter.
+For simplicity and conciseness, any system software will hereafter be called an operating system and the privileged part of a supposed system software will be called a kernel.
 That does not necessarily mean the supposed system software must actually satisfy any concrete definition of an OS but just that this report will assume these concepts can be used interchangeably.
 
 \subsection{Attack vectors on IoT system software}
 
-The attack vectors on IoT system software are numerous, therefore this section will provide some non-exhaustive examples to give the reader a basic understanding what proposed security enhancements aim to counteract:
+The attack vectors on IoT system software are numerous; therefore this section will provide some non-exhaustive examples to give the reader a basic understanding of what the proposed security enhancements aim to counteract:
 
-Software vulnerabilities can be exploited either in the  to e.g. create buffer overflows in applications on the system which then can be used to inject and possibly execute malicious code which could violate confidentiality, integrity and availability altogether.
-Also a malicious application could be running on a platform that multiple untrusted users are allowed to use which can violate e.g. confidentiality in case of the absence of process isolation.
+Software vulnerabilities can be exploited to e.~g. create buffer overflows in applications on the system, which can then be used to inject and possibly execute malicious code, violating confidentiality, integrity and availability altogether.
+Also a malicious application could be running on a platform that multiple untrusted users are allowed to use, which, in the absence of process isolation, can violate for example confidentiality.
 Lastly an attacker could also physically manipulate the device since a significant number of IoT devices is most probably not physically secured in any meaningful way.
 These physical manipulations include hard-resetting the system, probing the memory bus or disconnecting memory from the processor to gain sensitive information.
 \cite[p.~346, sect.~2.1]{spm}
 
-However this report will restrict it's examination exclusively on attacks without the possibility of any physical interactions with the systems components.
-%Short example, e.g. Return address corruption on the stack
+However this report will restrict its examination exclusively to attacks without the possibility of any physical interaction with the system's components.
+%Short example, e.~g. Return address corruption on the stack
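+
+As an illustration, the following minimal C sketch (hypothetical code, not taken from any of the cited systems) shows how a missing bounds check lets attacker-controlled input overwrite the saved return address on the stack:
+
+\begin{lstlisting}[language=C]
+#include <string.h>
+
+void process_packet(const char *payload, size_t len) {
+    char buf[16];
+    /* Bug: no bounds check -- a payload longer than 16 bytes
+     * overflows buf and overwrites adjacent stack contents,
+     * eventually the saved return address of this function. */
+    memcpy(buf, payload, len);
+}
+/* On return, the CPU jumps to whatever address the attacker
+ * placed over the saved return address, e.g. injected code in
+ * buf or an existing function in the firmware. */
+\end{lstlisting}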
 
 \subsection{Threat model}
 \label{sec:threat_model}
 
 If not stated otherwise, we will make use of a general model of an attacker which is defined by its capabilities.
-For this report we will assume an attacker that can arbitrarily write to memory at certain points in time during the execution of the software that is attacked.
+For this report we will assume an attacker that can arbitrarily write to memory at certain points in time during the execution of the software that is attacked.
 Further we assume that the attacker can not execute data or write to code segments in memory as enforced for example by a correspondingly configured Memory Protection Unit (MPU). %TODO introduce abbreviation again after abstract? But abstract is not really part of the text, is it?
 As previously mentioned, the attacker is explicitly not able to perform any kind of physical attack on the system the considered system software is executed on. Most approaches disregard such attacks since they might require different security measures which would go beyond the scope of this report, even though physical security is an important subject especially for IoT devices.
 
-Assuming such an attacker is realistic since exploitation of memory corruption vulnerabilities can in general achieve some forms of write access to memory even though in practice this access can be severely constrained somehow if certain security policies are enforced but a defender might not always be able to rely uppon such guarantees as will be shown in later sections.
+Assuming such an attacker is realistic since the exploitation of memory corruption vulnerabilities can in general achieve some form of write access to memory. In practice this access can be severely constrained if certain security policies are enforced, but a defender might not always be able to rely upon such guarantees, as will be shown in later sections.
 \cite[p.~3, sect.~3]{bending}
 
-The top goal of an attacker could be arbitrary code execution which means the attacker can invoke arbitrary system calls with arbitrary parameters to exercise all permissions the exploited software might have.
+The top goal of an attacker could be arbitrary code execution, which means that the attacker can invoke arbitrary system calls with arbitrary parameters to exercise all permissions the exploited software might have.
 Otherwise an attacker could also try to achieve confined code execution which makes it possible for the attacker to execute arbitrary code inside the memory space of the exploited software excluding the ability to invoke arbitrary system calls.
 Lastly the goal of an attacker could also be to leak information in form of arbitrary values from memory.
 \cite[p.~3, sect.~3]{bending}
 
 
-%TODO Threat Model? or threat model of every individual aproach?
+%TODO Threat Model? or threat model of every individual approach?
 
 \section{Approaches}
 \label{sec:approaches}
 
-The following sub-chapters will deal with the question which approaches are currently researched or even in use to improve the security of low-power system oftware.
+The following sub-chapters will deal with the question of which approaches are currently researched or even in use to improve the security of low-power IoT system software.
 These approaches for enhancing system software security are neither mutually exclusive nor exhaustive.
 
 The first examined approach is the introduction of hardware architecture enhancements to support existing/conventional software systems, which in this report will cover the usage of additional memory protection hardware like the MPU and the Protected Module Architectures (PMA) which include Trusted Execution Environments (TEE).
-For usage in novel system software a (mostly) memory- and type-safe programming language can also be used (in contrast to the widespread usage of the C-language).
+In novel system software a (mostly) memory- and type-safe programming language can also be used (in contrast to the widespread usage of the C language).
 Some operating systems like seL4 use formal methods to verify the code of the system software against a specification.
-The concept and properties of Control-Flow Integrity (CFI) is briefly introduced.
-At last virtualization is also used to isolate critical applications from each other also using CFI and other techniques.
+The concept and properties of Control-Flow Integrity (CFI) are briefly introduced.
+Lastly, virtualization is used to isolate critical applications from the underlying system, also making use of CFI and other techniques.
 
 \subsection{Hardware architecture enhancements}
 \label{sec:hardware_arch_enhance}
@@ -126,14 +126,14 @@ The following section will go into detail about the MPU and some forms of PMA.
 \subsubsection{Memory Protection Unit}
 \label{sec:mpu}
 
-Recent low-power microcontrollers (MCUs), e.g. the 32-bit Cortex-M MCUs, in contrast to older MCUs, include an MPU.
+Recent low-power microcontrollers (MCUs), e.~g. the 32-bit Cortex-M MCUs, in contrast to older MCUs, include an MPU.
 The MPU is a programmable part of the MCU's hardware that allows privileged software, like the kernel of an Operating System (OS), to protect chosen regions of physical memory, providing memory isolation between applications as well as between applications and the kernel \cite{mpu_about}.
-Using the MPU, the physical memory of the MCU ist partitioned into a number of regions which can have individual access permissions (read, write, execute) assigned to them.
+Using the MPU, the physical memory of the MCU is partitioned into a number of regions which can have individual access permissions (read, write, execute) assigned to them.
 The data about these access permissions resides in local registers of the MPU instead of the main memory and can only be changed by privileged software which in turn is defined by the MPU.
 \cite[p.~3,4, sect.~3.2, par.~3]{trustlite}
 Depending on the processor's implementation the MPU can separate the memory into up to 16 regions.
-In case of an access violation, e.g. during an instruction fetch or data access from the processor, the MPU triggers a fault exception to enable e.g. an OS kernel to enforce the desired security properties \cite{mpu_about} (for example using designated exception handlers).
-To detect an access violation the MPU monitors all memory access.
+In case of an access violation, e.~g. during an instruction fetch or data access from the processor, the MPU triggers a fault exception to enable e.~g. an OS kernel to enforce the desired security properties \cite{mpu_about} (for example using designated exception handlers).
+To detect an access violation the MPU monitors all memory accesses.
 It is possible to define memory regions in which code can not be executed by disallowing instruction fetches altogether or in which data is read-only to prevent memory corruption.
 This makes it harder for an attacker to inject and execute malicious code into memory regions that have these properties enforced by the MPU.
 \cite[par.~2]{mpu_whats_new}
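+
+The following sketch illustrates how privileged code could configure such a region; it is a minimal example assuming the memory-mapped MPU registers of the ARMv7-M architecture (as found in Cortex-M MCUs), not the API of any particular OS:
+
+\begin{lstlisting}[language=C]
+#include <stdint.h>
+
+#define MPU_CTRL (*(volatile uint32_t *)0xE000ED94)
+#define MPU_RNR  (*(volatile uint32_t *)0xE000ED98)
+#define MPU_RBAR (*(volatile uint32_t *)0xE000ED9C)
+#define MPU_RASR (*(volatile uint32_t *)0xE000EDA0)
+
+/* Mark a 32 kB region of application RAM (base must be 32 kB
+ * aligned) as read/write but non-executable, so that injected
+ * data can not be run as code. */
+void mpu_protect_app_ram(uint32_t base) {
+    MPU_RNR  = 0;              /* select region number 0        */
+    MPU_RBAR = base & ~0x1Fu;  /* region base address           */
+    MPU_RASR = (1u << 28)      /* XN: instruction fetches fault */
+             | (3u << 24)      /* AP: full read/write access    */
+             | (14u << 1)      /* SIZE: 2^(14+1) bytes = 32 kB  */
+             | 1u;             /* enable this region            */
+    MPU_CTRL = (1u << 2) | 1u; /* privileged background map     */
+}                              /* + enable the MPU              */
+\end{lstlisting}
+
+Any access violating these settings, e.~g. an instruction fetch from this region, would then raise the fault exception described above.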
@@ -141,14 +141,14 @@ MPUs, in contrast to MMUs do not provide support for virtual memory or memory se
 \cite[p.~236, par.~2]{tockos}
 
 However the security guarantees of an MPU might greatly depend on the actual implementation which could make additional security measures necessary.
-For example in FreeRTOS the memory isolation can be bypassed using a control-flow hijacking attack by abusing the fact that software needs to temporarily raise its priviledge to use the kernel API (which is done using the function \lstil{xPortRaisePrivilege}).
-An attacker could easily locate this raise-priviledge function in the static compiled firmware by reverse-engineering, exploit a memory error in the unprivileged task he has access to and invoke the raise-priviledge function to achieve priviledge escalation.
+For example in FreeRTOS the memory isolation can be bypassed using a control-flow hijacking attack by abusing the fact that software needs to temporarily raise its privilege to use the kernel API (which is done using the function \lstil{xPortRaisePrivilege}).
+An attacker could easily locate this raise-privilege function in the statically compiled firmware by reverse-engineering, exploit a memory error in an unprivileged task he has access to and invoke the raise-privilege function to achieve privilege escalation.
 \cite[p.~2,3, sect.~4, par.~2]{mpu_bad}
 % 1. Find a stack overflow bug
-% 2. Overwrite original return address on the stack by the address of the raise-priviledge function
-% 3. Priviledges raised!
+% 2. Overwrite original return address on the stack by the address of the raise-privilege function
+% 3. Privileges raised!
 
-To prevent an attack from calling arbitrary functions in the same priviledge mode one better approach might be to design an execution-aware MPU which not only distinguishes between privileged and unprivileged access but additionally considers the source of the access request by using the current value of the program counter (PC).
+To prevent an attacker from calling arbitrary functions in the same privilege mode, a better approach might be to design an execution-aware MPU which not only distinguishes between privileged and unprivileged access but additionally considers the source of the access request by using the current value of the program counter (PC),
 as proposed by the TrustLite security architecture \cite[p.~4, sect.~3.2.1, par.~2]{trustlite}.
 
 \subsubsection{Protected Module Architectures}
@@ -159,7 +159,7 @@ as proposed by the trustlite security architecture \cite[p.~4, sect.~3.2.1, par.
 The concept of a Self-Protecting Module (SPM) \cite[p.~347, sect.~3]{spm} describes "an area of memory with a particular layout
 and with particular memory protection settings" \cite[p.~347, sect.~3.1, par.~1]{spm}.
 The SPM consists of three sections of continuous memory segments (see fig.~\ref{fig:spm}): A protected, private section "SSecret"
-that only trusted code is able to directly access, a read-only publicly accessable section "SPublic"
+that only trusted code is able to directly access, a read-only publicly accessible section "SPublic"
 which contains data and code of the module and lastly a section "SEntry"
 that defines the entry points to the module's code by which the module can be interacted with from other modules or applications using the corresponding addresses of the entry points.
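+
+The access rule implied by this layout can be sketched as a simple check (a conceptual sketch with hypothetical types, not code from the cited paper): data accesses to SSecret are only granted while the program counter lies inside the module's own code.
+
+\begin{lstlisting}[language=C]
+#include <stdbool.h>
+#include <stdint.h>
+
+typedef struct {
+    uintptr_t public_start, public_end; /* SPublic: code + data */
+    uintptr_t entry_start,  entry_end;  /* SEntry: entry points */
+    uintptr_t secret_start, secret_end; /* SSecret: protected   */
+} spm_layout;
+
+/* Conceptual check: a read or write to SSecret is only allowed
+ * while the PC is executing the module's own code sections. */
+bool spm_access_allowed(const spm_layout *m,
+                        uintptr_t pc, uintptr_t addr) {
+    bool targets_secret = addr >= m->secret_start
+                       && addr <  m->secret_end;
+    bool pc_in_module = (pc >= m->public_start && pc < m->public_end)
+                     || (pc >= m->entry_start  && pc < m->entry_end);
+    return !targets_secret || pc_in_module;
+}
+\end{lstlisting}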
 
@@ -178,11 +178,11 @@ This is called Program Counter Based Access Control (PCBAC).
 Before the described structure of an SPM can provide any security guarantees it needs to be created and initialized.
 The OS loads the sections SPublic and SEntry into memory first.
 This loading process is considered untrusted since it might be interfered with by an attacker.
-Therefore the SPM gets validated later on for authenticity by considering hashes of both sections (SPublic, SEntry), the layout of the SPM and a cryptographic signature signing these informations all of which together are called the "security report". % signed by the issuer of the module
-The signature of the report itself can then be validated using a certificate authority (CA) and corresponding a chain of trust.
+Therefore the SPM gets validated later on for authenticity by considering hashes of both sections (SPublic, SEntry), the layout of the SPM and a cryptographic signature over this information, which all together are called the "security report". % signed by the issuer of the module
+The signature of the report itself can then be validated using a certificate authority (CA) and a corresponding chain of trust.
 \cite[p.~352, sect.~3.5, par.~3]{spm}
 
-In the second step the SPM is actually create using a required addition to the  hardware instruction set (\lstil{setProtected}).
+In the second step the SPM is actually created using a required addition to the hardware instruction set (\lstil{setProtected}).
 The creation process sets up the memory protected boundaries between the three memory sections of the SPM and clears the SSecret section.
 From this point on only the SPM itself is able to modify or destroy itself.
 Since the SPublic and SEntry sections can be read by anyone but can not be modified, they are used to identify an SPM.
@@ -191,7 +191,7 @@ Since the SPublic and SEntry sections can be read by anyone but can not be modif
 In the third step the data must be loaded into the SSecret section.
 This can happen using public data from SPublic but in some instances it might be necessary to load secret data into SSecret, for example cryptographic keys \cite[p.~355, sect.~3.7, par.~1]{spm}.
 This is done using a certain trusted SPM called the "vault" which authenticates the newly loaded SPM and then passes the secret data that is required for initialization.
-The vault SPM itself must get initialized during the boot process requiring that initialization process to be trusted.
+The vault SPM itself must get initialized during the boot process requiring its initialization process to be trusted.
 After the initialization the vault SPM must never be unloaded, otherwise initializing new SPMs might not be possible.
 \cite[p.~347,348, sect.~3.1, par.~6,7]{spm}
 
@@ -199,94 +199,97 @@ After completing the initialization phase an SPM can securely use the functional
 To guarantee correct identification the SPMs authenticate each other before communication using the security report.
 Which SPM must be authenticated depends on the type of application.
 
-For example an SPM "SecureRandom" that implements a random number generator must itself be authenticated by the caller while it is not necessary for the SecureRandom to authenticate the caller since it does not leak any secrets.
+For example an SPM "SecureRandom" that implements a random number generator must itself be authenticated by the caller while it is not necessary for the SecureRandom module to authenticate the caller since it does not leak any secrets.
 \cite[p.~352,352, sect.~3.6, par.~3]{spm}
 
-For the initialization of a client SPM with the help of the vault SPM on the other hand, a mutual authentication is necessary since otherwise a malicious vault SPM could inject arbitrary secrets and thereby violate the integrity of the client SPM or a malicious client SPM imporsonating a different SPM could retrieve secrets from the vault SPM which the client SPM should not have access to and thereby violating confidentiality.
+For the initialization of a client SPM with the help of the vault SPM on the other hand, a mutual authentication is necessary: otherwise a malicious vault SPM could inject arbitrary secrets and thereby violate the integrity of the client SPM, or a malicious client SPM impersonating a different SPM could retrieve secrets from the vault SPM which the client SPM should not have access to, thereby violating confidentiality.
 \cite[p.~352-354, sect.~3.6, subsect.~2 par.~2]{spm}
 
-However a couple of limitations need to considered for the described state of the SPM architecture for example that an SPM should not be able to be interrupted otherwise secret data could be leaked to the kernel, that the memory sections of an SPM should not be able to be accessed directly using direct memory access (DMA) or that SPMs should not be run concurrently to avoid leaking secret data stored in registers.
+However a couple of limitations need to be considered for the described state of the SPM architecture: an SPM should not be interruptible, otherwise secret data could be leaked to the kernel; the memory sections of an SPM should not be directly accessible using direct memory access (DMA); and SPMs should not run concurrently, to avoid leaking secret data stored in registers.
 \cite[p.~356, sect.~3.9, par.~1,2]{spm}
 
 \paragraph{Trusted Execution Environment}
 \label{sec:tee}
 
-System software which especially includes operating systems (OS) might comprise of complex core components that the system must be able to trust for privileged operations.
+System software, which especially includes operating systems (OS), might comprise complex core components which the system must be able to trust to execute privileged operations.
 The larger and more complex the system and its components become, the more likely it is that potential vulnerabilities in the code of privileged parts of the system might remain undetected and could therefore be exploited by an attacker.
 To restrict access to the trusted system parts by the untrusted parts of the system, a hardware isolation mechanism called the Trusted Execution Environment (TEE) has been proposed.
-The TEE provides an environment for safety critical application to run in separated from large attack surface that the untrusted rich OS provides. %TODO define rich OS
+The TEE provides an environment for safety-critical applications to run in, separated from the large attack surface that an untrusted traditional OS provides. %TODO define rich OS
 The TEE should therefore guarantee the confidentiality and integrity of the trusted computations inside it, while the interface between the TEE and the rest of the system is reduced to a minimum.
 \cite[p.~130:8, sect.~3.1, par.~1,2]{tzone}
 
 % What does Trustzone do about the confused deputy problem? How can it know which untrusted program should have which capabilities?
 
 % TODO As example: Arm TrustZone-M or RISC-V MultiZone?
-Many systems can be separated into trusted code which should be safe to run since it is e.g. verified and non-trusted code which may be given by a user who might be malicious. To guarantee that the any untrusted potentially malicious code is not able to directly use trusted parts or ressources of the system it may be necessary to create an isolation between these components that considering different levels of trust in the executed software.
+Many systems can be separated into trusted code, which should be safe to run since it is verified in some way (formally, audits, etc.), and non-trusted code, which may be given by a user who might be malicious.
+To guarantee that any untrusted, potentially malicious code is not able to directly use trusted parts or resources of the system, it may be necessary to create isolation between these components that considers different levels of trust in the executed software.
 Such isolation of trusted parts of the system might be able to be achieved by so called Trusted Execution Environments (TEEs).
 Examples for hardware that implements TEEs on low-power IoT systems are "Arm TrustZone-M" and "RISC-V MultiZone".
 
-Arm TrustZone-M (TZ-M) is hardware-based security mechanism designed i.~a. for the Cortex-M MCUs \cite[p.~130:5, sect.~2.2, par.~1]{tzone} and provides the processor with two strictly separated protection domains to operate in: the secure world and the normal world.
-The processor may only ever operate in one of these domains at the same time and therefore executes instructions in either a secure or non-secure state.
+Arm TrustZone-M (TZ-M) is a hardware-based security mechanism designed, among others, for the Cortex-M MCUs \cite[p.~130:5, sect.~2.2, par.~1]{tzone} and provides the processor with two strictly separated protection domains to operate in: the secure world and the normal world.
+The processor may only ever operate in one of these domains at the same time and therefore executes instructions in either a secure or a non-secure state.
 \cite[p.~130:3, sect.~2.1, par.~2]{tzone}
 
 The software in any TZ-M-enabled system is accordingly divided into secure and non-secure software.
 Secure software must be executed by the processor in the secure state and must reside in the secure memory address space %TODO what is the secure address space?
-while it has the permissions to access both secure and non-secure memory addesses and peripherals.
+while it has the permissions to access both secure and non-secure memory addresses and peripherals.
 Examples for secure software are components such as secure-boot, encryption libraries and firmware updates.
 Executing non-secure software on the other hand requires the processor to be in the non-secure state and for the code of the software to reside in a non-secure address space.
 Accordingly non-secure software can exclusively access non-secure memory and peripherals.
-An example for non-secure software is general application code which might include user interactions.
+An example for non-secure software is general application code, for instance software that end-users directly interact with.
 \cite[p.~2, sect.~2, par.~2]{tzone_m}
 
 %TODO function calls? \cite[p.~2, sect.~2, par.~4]{tzone_m}
 
-Therefore the address space needs to be partitioned into secure and non-secure areas areas.
-The partitioning is configured by two components: the developer programmable Security Attribution Unit (SAU) which is programmable by secure software developers and the Implementation Defined Attribution Unit (IDAU) which is device specific and configured by the vendor \cite[p.~3, subsect.~2~E, par.~3]{tzone_m}.
+Therefore the address space needs to be partitioned into secure and non-secure areas.
+The partitioning is configured by two components: the developer programmable Security Attribution Unit (SAU) which is programmable by secure software developers and the Implementation Defined Attribution Unit (IDAU) which is device-specific and configured by the vendor \cite[p.~3, subsect.~2~E, par.~3]{tzone_m}.
 Changing the memory space security configuration can only be done by secure software.
 \cite[p.~2, sect.~2, par.~3]{tzone_m}
 To implement the partitioning in the internal SRAM (static random access memory), a block based Memory Protection Controller (MPC) can be used which partitions the address space of the memory peripheral into blocks which can either be marked as secure or non-secure \cite[p.~2,3, sect.~2~A, par.~1,2]{tzone_m}.
-For external memory peripherals such as large flash memories that do not need such granular access control, a watermark based approach can be used for partitioning: the MPC sets a watermark at some address in the memory and all addresses below are considered secure and all addresses above are considered non-secure \cite[p.~3, sect.~2~B, par.~1,2]{tzone_m}.
+For external memory peripherals such as large flash memories that do not need such granular access control properties, a watermark-based approach can be used for partitioning: the MPC sets a watermark at some address in the memory and all addresses below are considered secure and all addresses above are considered non-secure \cite[p.~3, sect.~2~B, par.~1,2]{tzone_m}.
 
-The peripheral bus system of a TZ-M-enabled MCU also distinguishes between secure and non-secure access and therefore any component on the bus can verify and propagate the security level for operations on the bus. The CPU tags each bus access depending on the corresponding security level of the address %TODO which address?
-and wether the processor was in the secure or non-secure state.
+The peripheral bus system of a TZ-M-enabled MCU also distinguishes between secure and non-secure access and therefore any component on the bus can verify and propagate the security level of its operations on the bus. The CPU tags each bus access depending on the corresponding security level of the address %TODO which address?
+and whether the processor was in the secure or non-secure state.
 \cite[p.~2, sect.~2, par.~5]{tzone_m}
 
 %TODO peripherals? section 2 C
 
 A function in non-secure software on an MCU with TZ-M enabled cannot directly call secure code.
 Instead non-secure software must use certain "secure function entry points" \cite[p.~130:5, sect.~2.2, par.~2]{tzone} to communicate with secure software.
-These entry points on the secure side for non-secure callers are marked in the code using the secure gateway instruction (SG) which has been added to the instruction set for that purpose.
+These entry points on the secure side for non-secure callers are marked in the code using the secure gateway instruction (SG), which has been added to the instruction set for that purpose.
 The SG instruction forces a security check on the non-secure side and switches the security state from non-secure to secure.
 This enables the non-secure caller to have application-specific restricted access to secure resources depending on the call.
 On return the CPU transparently switches back to non-secure state.
-For secure code to jump call into non-secure code there exists a reverse process using the BLXNS instuction.
+For secure code to call into non-secure code there exists a reverse process using the BLXNS instruction.
 \cite[p.~4,5, sect.~2~G]{tzone_m}
-On a security state switch from secure to non-secure the CPU registers are stored on the secure stack and cleared so that no secure data is leaked into the non-secure state \cite[p.~5, sect.~2~H, par.~4]{tzone_m}.
+On a security state switch from secure to non-secure, the CPU registers are stored on the secure stack and cleared so that no secure data is leaked into the non-secure state \cite[p.~5, sect.~2~H, par.~4]{tzone_m}.
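+
+In practice, toolchains expose this mechanism through the ARMv8-M Security Extensions (CMSE). The following minimal sketch (assuming a CMSE-capable compiler invoked with the \lstil{-mcmse} flag; function and variable names are made up for illustration) shows a secure-side entry point callable from the non-secure world:
+
+\begin{lstlisting}[language=C]
+#include <arm_cmse.h>
+
+/* The attribute makes the compiler emit an SG-guarded veneer so
+ * that non-secure code can call this function; on return the
+ * compiler clears registers that could leak secure state. */
+__attribute__((cmse_nonsecure_entry))
+int secure_counter_add(int delta) {
+    static int counter = 0; /* state kept in secure memory */
+    counter += delta;
+    return counter;
+}
+\end{lstlisting}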
 
 \begin{figure}[htbp]
 \centerline{\includegraphics[width=0.9\linewidth]{pictures/trustzone.png}}
-\caption{Example scheme of multiple secure software libraries separated using MPU-enforced priviledges in the secure memory \cite[p.~14]{tzone_m_pres}}
+\caption{Example scheme of multiple secure software libraries separated using MPU-enforced privileges in the secure memory \cite[p.~14]{tzone_m_pres}}
 \label{fig:trustzone}
 \end{figure}
 
-It also might be necessary to separate different software of the same security level from each other for example software libraries that are part of the firmware which resides in the secure world (see fig.~\ref{fig:trustzone}).
+It also might be necessary to separate different software of the same security level from each other, for example software libraries that are part of the firmware which resides in the secure world (see fig.~\ref{fig:trustzone}).
 To achieve that, in addition to the distinction into secure and non-secure code as defined by the SAU, the MPU is used to additionally distinguish between privileged and unprivileged code.
-The security and priviledge settings are orthogonal to each other which results in four different security levels for code execution (secure and privileged, secure and unprivileged, etc.).
-This requires both the secure and the nonsecure side of the system to own individual MPU instances.
+The security and privilege settings are orthogonal to each other which results in four different security levels for code execution (secure and privileged, secure and unprivileged, etc.).
+This requires both the secure and the non-secure side of the system to own individual MPU instances.
 \cite[p.~3, sect.~2~D]{tzone_m}
 
 %\subsubsection*{Attack vectors}
 Even though an attacker might not be able to directly access anything inside the TEE, the TEE implementations might still be susceptible to attacks based on the confused deputy problem.
 While any untrusted program can use the interface to communicate with the trusted part of the system, it is not specified how the capabilities of different programs to use the trusted interface are managed. %TODO See if that is actually the case!!
-This could be exploited since the examined implementations ("Arm TrustZone-M" and "RISC-V MultiZone") do not cover access control mechanisms that would be able to prevent such attacks, like capability-based access control mechanisms.
+This could be exploited since the examined implementations ("Arm TrustZone-M" and "RISC-V MultiZone") do not cover such fine-grained access control mechanisms that would be able to prevent such attacks, like capability-based access control mechanisms.
+Apart from that the security state switching mechanisms might also pose certain vulnerabilities depending on the implementation, as shown in case of the MPU in section~\ref{sec:mpu}.
 
-\subsection{Usage of type-safe languages}
+\subsection{Usage of memory- and type-safe languages}
 \label{sec:safelang}
 
 Instead of achieving process isolation using virtual address spaces another approach consists of writing the kernel in a memory-safe programming language while omitting the otherwise needed hardware protection.
-But the high-level languages that can be used in that approach might have some drawbacks like in the case of the Spin or the Singularity OS kernels  %TODO Citation needed?
-which are garbage-collected languages. The garbage collector makes the memory placement and layout in the kernel more complicated and can also cause the system to halt periodically to process accumulated garbage.
-Such a eventual halt of the whole OS can be problematic if a real-time system is required to behave deterministic concerning timings.
+But the high-level languages that can be used in that approach might have some drawbacks, as in the case of the Spin or the Singularity OS kernels %TODO Citation needed?
+which are written in garbage-collected languages.
+The garbage collector makes the memory placement and layout in the kernel more complicated and can also cause the system to halt periodically to process accumulated garbage.
+Such an eventual halt of the whole OS can be problematic if a real-time system is required to behave deterministically concerning timings.
 \cite[p.~1, sect.~1, par.~1,2]{rust}
 Therefore the Rust programming language has been proposed as an alternative programming language for system software because it does not rely on a garbage collector for memory management.
 Instead Rust uses lifetime- and scope-tracking to decide when to deallocate values.
@@ -294,19 +297,19 @@ Instead Rust uses lifetime- and scope-tracking to decide when to deallocate valu
 
 Even though most of the kernel's code can be written using safe language features there are some parts of the kernel that require the usage of unsafe code.
 The first category of unsafe code is the one in the included standard library of the programming language which is written by the language developers and consists of e.~g. bounds checks and type casts.
-These second category are the unsafe parts in the kernel itself which include for example the context switches, system call traps and the interrupt handling.
+The second category concerns the unsafe parts in the kernel itself which include for example the context switches, system call traps and the interrupt handling.
 The goal of any safety-concerned OS in a memory-safe language is to reduce these unsafe parts to a minimum.
 \cite[p.~1,2, sect.~1, par.~4]{rust}
 
 \subsubsection*{Attack vectors}
-An attacker would probably first search for vulnerabilities in these unsafe parts of the software that the type-safe part needs to operate since here it might be still possible to perform traditional attacks like buffer overflows.
+An attacker would probably first search for vulnerabilities in these unsafe parts of the software that the safe part needs in order to operate, since there it might still be possible to perform traditional attacks like buffer overflows.
 
 %"While type safety is a good property to have, it is not very strong. The kernel may still misbehave or attempt, for instance, a null pointer access." \cite[p.~218, par.~10]{sel4}
 
 \subsection{Formal verification}
 \label{sec:verify}
 
-There exist different solutions to formally verify software. This report focuses for examining this topic on two examples: The verification of the seL4 microkernel and the usage of TLA+ (extended "Temporal Logic of Actions") specifications.
+There exist different solutions to formally verify software. For examining this topic, this report focuses on the verification of the seL4 microkernel as a case study.
 
 Although formal verification can help to enforce rigorous security and safety properties, its actual value depends on the accuracy and correctness of the modelling process: The system that is verified must be abstractly modelled so that it can be tested against the specification. If the modelling process or parts of the model are erroneous, the guarantees may not hold. For the seL4 microkernel this concerns especially the assumptions about the correctness of the compiler, the assembly code and the hardware.
 Provided these assumptions hold, it is possible to guarantee that the seL4 microkernel will never crash and will never perform unsafe operations, which makes the behaviour of the kernel predictable in every possible situation. \cite[p.~207, par.~2]{sel4}
@@ -317,11 +320,12 @@ Assuming that the assumptions hold, it is possible to guarantee that the seL4 mi
 \label{fig:sel4}
 \end{figure}
 
-In seL4 the process of formal verification (see fig.~\ref{fig:sel4}) starts with the abstract specification of the operational model which describes the behaviour of the complete system. This includes e.g. the set of available system-calls and their abstract effects or the behaviour in case of an interrupt or fault situation but it does not include any implementation details.
+In seL4 the process of formal verification (see fig.~\ref{fig:sel4}) starts with the abstract specification of the operational model which describes the behaviour of the complete system. This includes e.~g. the set of available system-calls and their abstract effects or the behaviour in case of an interrupt or fault situation but it does not include any implementation details.
 \cite[p.~209, sect.~2.3, par.~3]{sel4}
 
-According to the abstract specification a prototype of the kernel written in a subset of the programming language Haskell. This prototype is used to directly generate a formal executable specification from. This is possible by confining oneself to the mentioned subset of Haskell which can be automatically translated to the language of the used theorem prover ("Isabelle/HOL" \cite[p.~209, sect.~2.3, par.~1]{sel4}). \cite[p.~209, sect.~2.2, par.~2]{sel4}
-The Haskell prototype is therefore "an executable model and implementation of the final design" \cite[p.~209, ch.~2.2, par.~3]{sel4} but the final production kernel is a manual re-implementation of that prototype in the C programming language. The use of the C implementation instead of the Haskell prototype avoids having to verify the significantly large codebase of the Haskell runtime, it avoids the overhead induced by the usage of the garbage collector of the Haskell runtime and it finally makes low-level performance optimizations possible. \cite[p.~209, sect.~2.2, par.~3]{sel4}
+According to the abstract specification a prototype of the kernel is written in a subset of the programming language Haskell.
+A formal executable specification is then generated directly from this prototype. This is possible by confining oneself to the mentioned subset of Haskell which can be automatically translated to the language of the used theorem prover ("Isabelle/HOL" \cite[p.~209, sect.~2.3, par.~1]{sel4}). \cite[p.~209, sect.~2.2, par.~2]{sel4}
+The Haskell prototype is therefore "an executable model and implementation of the final design" \cite[p.~209, sect.~2.2, par.~3]{sel4} but the final production kernel is a manual re-implementation of that prototype in the C programming language. The use of the C implementation instead of the Haskell prototype avoids having to verify the significantly large codebase of the Haskell runtime, it avoids the overhead induced by the usage of the garbage collector of the Haskell runtime and it finally makes low-level performance optimizations possible. \cite[p.~209, sect.~2.2, par.~3]{sel4}
 
 The prototype itself is not checked against any specification but only the resulting executable specification is, which makes its translation to the executable specification a non-critical part concerning correctness.
 In contrast to the abstract specification the executable specification must include all data structures and implementation details that the final C-implementation does.
@@ -330,11 +334,11 @@ In contrast to the abstract specification the executable specification must incl
 At last the C implementation of seL4 must be verified against the executable specification.
 The formal verification requires programs to have formally-defined semantics.
 Therefore it was necessary for seL4 to create "very exact and faithful formal semantics for a large subset of the C programming language" \cite[p.~209, sect.~2.3, par.~7]{sel4}.
-The result is then translated for the prover to be able to work with it.
+The result is then translated for the theorem prover to be able to work with it.
 \cite[p.~209, sect.~2.3, par.~7]{sel4}
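+
+The overall result of these steps can be summarized as a chain of refinements (the notation here is introduced for this summary and is not taken from the paper): writing $\mathrm{beh}(\cdot)$ for the set of possible behaviours of a layer, the proofs establish
+\begin{displaymath}
+\mathrm{beh}(\mbox{C implementation}) \subseteq \mathrm{beh}(\mbox{executable spec.}) \subseteq \mathrm{beh}(\mbox{abstract spec.})
+\end{displaymath}
+so that properties proven about the abstract specification also constrain every behaviour the compiled kernel can exhibit.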
 
 Another less rigorous method would be to write a software specification using a formal specification language like the extended Temporal Logic of Actions (TLA+) \cite{tla} which can then be verified using the accompanying TLC model checker to show the correctness of that specification.
-Even though this is a lot less costly than the elaborate verfification process for the seL4 kernel, this can not guarantee that the actual code for which the specification is written does accurately implement the specification which leaves room for more errors in the development process and therefore could weaken security guarantees that the final software provides.
+Even though this is a lot less costly than the elaborate verification process for the seL4 kernel, it can not guarantee that the actual code for which the specification is written accurately implements the specification. This leaves room for more errors in the development process and therefore could weaken the security guarantees that the final software provides.
 
 \subsubsection*{Attack vectors}
 An attacker could search for a weak point in the assumptions that the abstraction or the model of the verification relies on.
@@ -347,10 +351,11 @@ If an attacker can find errors in the correctness of any of those assumptions it
 
 In order to prevent control-flow hijacking attacks a Control-Flow Integrity (CFI) mechanism can be applied.
 CFI is a security policy which restricts the control-flow of a software to only allow certain so-called control-flow transfers. While the control-flow is a certain order of instructions defined by the program's code, a control-flow transfer is an instruction or a sequence of multiple instructions that modify the PC to continue execution at another target address in the code's memory, like \lstil{jump}, \lstil{branch}, \lstil{call} or \lstil{return}.
-Program languages of a higher abstract level (see sect.~\ref{sec:safelang}) enforce the target adresses to be valid lanugage construct like functions or conditional statements.
-On level of the machine on the other hand (e.~g. assembly language) it is possible for the control-flow transfers to target any address in the system's memory, except for certain restrictions due to the memory management for example if virtual memory is used or if only the code segment can be targeted when enforcing Data Execution Prevention (DEP), which prevents the attacker from executing code written to a data memory segment, using the MPU (see sect.~\ref{sec:mpu}).
+Programming languages of a higher abstraction level (see sect.~\ref{sec:safelang}) enforce the target addresses to be valid language constructs like functions or conditional statements.
+On the machine level on the other hand (including assembly language) it is possible for control-flow transfers to target any address in the system's memory, except for certain restrictions due to the memory management.
+Such restrictions could be that virtual memory is used or that only the code segment can be targeted when enforcing Data Execution Prevention (DEP), which prevents the attacker from executing code written to a data memory segment, e.~g. by using the MPU (see sect.~\ref{sec:mpu}).
 In a program written in a language with manual memory management (e.~g. C, C++) without any enforced security policy an attacker can use vulnerabilities in the program to route the execution flow to any arbitrary target address.
-CFI mechanisms restrict the set of target addresses of every control flow transfer so that the program can still be executed as expected while making it harder for an attacker to gain control of the execution flow and limiting the consequences of a successfull control-flow hijacking attack so that the attacker can not do arbitrary computations on the system.
+CFI mechanisms restrict the set of target addresses of every control-flow transfer, so that the program can still be executed as expected while making it harder for an attacker to gain control of the execution flow and limiting the consequences of a successful control-flow hijacking attack so that the attacker can not do arbitrary computations on the system.
 Likewise software written in memory- and type-safe programming languages could indirectly make use of some of the security guarantees that a CFI mechanism provides since all of them depend on some usage of unsafe code like a VM (see sect.~\ref{sec:vm}) or libraries written in C which again can be secured using CFI.
 \cite[p.~16:4, sect.~2.1, par.~1]{cfi_performance}
 
@@ -358,18 +363,14 @@ CFI mechanisms consist of two phases: the analysis phase and the enforcement pha
 In the analysis phase a control-flow graph (CFG) is constructed, which usually happens before the program's runtime.
 The program's code can be considered as split into blocks separated by control-flow transfers (as seen in fig.~\ref{fig:cfi}).
 Thus the CFG is a directed graph consisting of vertices representing such blocks and edges representing control-flow transfers.
-The CFG is therefore a describes the expected control-flow of the program it is generated for.
+The CFG therefore describes the expected control-flow of the program it is generated for.
 In the enforcement phase the CFI mechanism enforces the security policy as defined by the CFG.
-Any control-flow transfer is validated by checking if the control-flow transfer is contained in the set of valid CFG-edges of the current block the program is executing.
-If that is not the case then an attacker must have manipulated the program and the CFI can act accourdingly.
+Any control-flow transfer is validated by checking if it is contained in the set of valid CFG edges of the block the program is currently executing.
+If that is not the case then an attacker must have manipulated the program and the CFI can act accordingly.
 \cite[p.~16:2, par.~2]{cfi_performance}
 
-The more precise the CFG is constructed the stronger the security guarantees are that the CFI mechanism can provide. \cite[p.~16:4-16:5, sect.~2.1, par.~2]{cfi_performance}
+The more precise the CFG is constructed, the stronger the security guarantees are that the CFI mechanism can provide. \cite[p.~16:4-16:5, sect.~2.1, par.~2]{cfi_performance}
 
-A CFI mechanism usually requires that it is enforced that any code memory segment of the program is read-only since otherwise an attacker could rewrite the instructions that check the control-flow transfers and could therefore nullify any security guarantee of the CFI mechanism.
-Therefore the CFI mechanism does not have to validate direct control-flow transfers from which the target address is static and therefore known at compile time so that an attacker could not modify the address since the code can not be modified.
-The type of control-flow transfers that in constrast have to be validated are the indirect control-flow transfers from which the target address is to be determined at runtime which is the case for transfers that depends on input from a human user or for function returns that are read from the stack.
-%
 \begin{lstlisting}[language=C]
 
 bool lt(int x, int y) {
@@ -389,20 +390,23 @@ sort2(int a[], int b[], int len) {
 \caption{A scheme of a CFG (bottom) of an example program (top). \cite[p.~4:10, fig.~1]{cfi_principles} The boxes in the scheme represent the blocks of code and the arrows represent the control-flow transfers. Direct control-flow transfers are represented by dotted arrows, calls (indirect transfers) by solid arrows and returns (indirect transfers) by dashed arrows.}
 \label{fig:cfi}
 \end{figure}
-%
+
+A CFI mechanism usually requires the enforcement that any code memory segment of the program is read-only, since otherwise an attacker could rewrite the instructions that check the control-flow transfers and could therefore nullify any security guarantee of the CFI mechanism.
+Therefore the CFI mechanism does not have to validate direct control-flow transfers, for which the target address is static and therefore known at compile time: since the code can not be modified, an attacker can not modify these addresses either.
+The type of control-flow transfers that in contrast have to be validated are the indirect control-flow transfers, whose target address is determined at runtime, which is the case for transfers that depend on an input from an end-user or for function returns that are read from the stack.
 The target addresses of such indirect control-flow transfers can reside in registers as well as in the general memory depending on the program's code and the chosen compiler.
 To enforce the security policy the CFI mechanism has to compare the target address of every indirect control-flow transfer to the set of valid target addresses in the CFG and determine if the program's control-flow still follows the expected edges in the CFG.
 \cite[p.~16:4-16:5, sect.~2.1, par.~3,4]{cfi_performance}
 
 However the CFG alone can not enforce that a function actually returns to the address from which it has been called.
 The CFG can not define the exact order in which the edges must be traversed; therefore returning to a different target address would not be a violation of the security policy as long as the target address corresponds to a valid edge in the CFG.
-This makes it easier for an attacker to use the more numerous possible edges to for example relatively easy perform a return-to-libc attack which is the technique of reusing functions, including especially powerful libc functions like \lstil{system()}, that already exist in the code segment of the vulnerable process \cite[p.~2, sect.~2.2]{bending}.
+This makes it easier for an attacker to use the more numerous possible edges to, for example, perform a return-to-libc attack relatively easily, which is the technique of reusing functions that already exist in the code segment of the vulnerable process, including especially powerful libc functions like \lstil{system()} \cite[p.~2, sect.~2.2]{bending}.
 To enforce that any \lstil{return} instruction actually returns to the caller a CFI mechanism can use a so-called protected shadow call stack.
 \cite[p.~4:23, sect.~5.4, par.~1]{cfi_principles}
 
 The shadow call stack can be implemented as a LIFO stack on which the correct return address gets pushed as soon as the corresponding \lstil{call} instruction is executed.
 On every execution of a \lstil{return} instruction the corresponding correct return address gets popped from the stack and can be compared to the return address on the normal stack.
-If both return addresses differ an attacker might have manipulated the stack and the CFI can act accourdingly.
+If both return addresses differ, an attacker might have manipulated the stack and the CFI can act accordingly.
 \cite[p.~6, sect.~3.4, par.~1-3]{stackghost}
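+
+A minimal C sketch of this push/pop discipline could look as follows (illustrative names and a fixed depth are assumed; a real mechanism would insert these operations around every \lstil{call} and \lstil{return} instruction):
+%
+\begin{lstlisting}[language=C]
+#include <stdint.h>
+#include <stdlib.h>
+
+#define SHADOW_DEPTH 256
+
+/* Assumed to reside in protected memory, e.g. an
+ * MPU region writable only by the CFI mechanism. */
+static uintptr_t shadow_stack[SHADOW_DEPTH];
+static size_t shadow_top = 0;
+
+/* Inserted after every call instruction. */
+void shadow_push(uintptr_t return_address) {
+    if (shadow_top == SHADOW_DEPTH)
+        abort(); /* shadow stack overflow */
+    shadow_stack[shadow_top++] = return_address;
+}
+
+/* Inserted before every return instruction. */
+void shadow_check(uintptr_t address_on_normal_stack) {
+    if (shadow_top == 0 ||
+        shadow_stack[--shadow_top] != address_on_normal_stack)
+        abort(); /* return address was manipulated */
+}
+\end{lstlisting}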
 
 To make sure that the shadow call stack can not be manipulated by an attacker to bypass the restrictions it is necessary for the shadow call stack to reside in a protected memory area.
@@ -420,19 +424,19 @@ However even though CaRE is designed for low-power microcontrollers the performa
 %TODO For a more efficient implementation of shadow stacks: [Silhouette: Efficient Protected Shadow Stacks for Embedded Systems](https://www.usenix.org/system/files/sec20-zhou-jie.pdf)
 
 \subsubsection*{Attack vectors}
-Assuming a supposed perfectly enforced CFI policy and attacker is not able to corrupt any pointer to a code segment like a return adress or a function pointer which will be called a code pointer hereafter.
+Assuming a perfectly enforced CFI policy, an attacker is not able to corrupt any pointer to a code segment, like a return address or a function pointer, which will be called a code pointer hereafter.
 Still an attacker might be able to manipulate the parameters to a sensitive function like \lstil{execve} in libc to execute arbitrary programs or overwrite configuration files which might influence the execution of certain security checks.
 \cite[p.~2,3, sect.~2.4]{bending}
 Based on that idea the Control-Flow Bending (CFB) technique has been proposed \cite[p.~3, sect.~2.5]{bending} to attack CFI enforced software.
 In addition to exclusively manipulating data instead of code pointers, CFB also allows manipulating code pointers as long as they adhere to the CFI security policy, which again means that any target address of an indirect control-flow transfer must be in the set of valid target addresses according to the CFG.
 In a CFB attack the attacker therefore "bends" the control-flow of the software to achieve their goals while still ensuring not to violate the CFI policy.
 \cite[p.~3, sect.~2.5]{bending}
-This makes it important to use a precise CFG since the less precise our CFG is the more options the attacker has to bend the control flow since the number of possible target addresses per indirect control-flow transfer increases.
+This makes it important to use a precise CFG: the less precise the CFG is, the more options the attacker has to bend the control flow, since the number of possible target addresses per indirect control-flow transfer increases.
 The usage of a protected shadow stack is also necessary, since it forces the attacker to return to the caller address instead of an arbitrary valid target address.
-But even in a shadow-stack-enabled hypothetical maximally precise CFI policy a CFB attack would still be possible using so called dispatcher functions \cite[p.~7, sect.~6.1.1, par.~2]{bending}.
+But even assuming the enforcement of a hypothetical maximally precise CFI policy with an enabled shadow stack, a CFB attack could still be possible using so called dispatcher functions \cite[p.~7, sect.~6.1.1, par.~2]{bending}.
 
 Dispatcher functions are functions that are able to overwrite their own return address if called with certain arguments.
-As long as the return address returns to a location where the dispatcher function has been called it still adheres to the CFI policy.
+As long as the return address returns to a location where the dispatcher function has been called, it still adheres to the CFI policy.
 An attacker therefore has to locate the usage of such a dispatcher function and a vulnerability to control its arguments.
 The trivial example for such a dispatcher function would be \lstil{memcpy()} in libc.
 An attacker that controls the arguments of \lstil{memcpy()} could point the source buffer to a memory location controlled by the attacker, the target buffer to the return address of \lstil{memcpy()} and the length to the word size.
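+
+Conceptually, the attack on such a dispatcher function could look like the following hypothetical sketch (not a working exploit; the argument values are placeholders):
+%
+\begin{lstlisting}[language=C]
+#include <string.h>
+
+/* Sketch: all three arguments are attacker-controlled,
+ * e.g. obtained through a memory corruption bug. */
+void dispatch(void *dst, const void *src, size_t len) {
+    /* If the attacker sets
+     *   src = a buffer holding the desired target address,
+     *   dst = the location of memcpy()'s own saved
+     *         return address on the stack,
+     *   len = the word size,
+     * then memcpy() overwrites the address it returns to,
+     * while the return still targets a valid call site. */
+    memcpy(dst, src, len);
+}
+\end{lstlisting}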
@@ -446,17 +450,17 @@ Using certain dispatcher functions, in particular \lstil{printf()}, with suitabl
 \label{sec:vm}
 
 Memory safety can also be achieved by using virtualization.
-A virtual machine (VM) that runs on the hardware can run programs with languages of higher abstraction levels than e.g. the C language.
+A virtual machine (VM) that runs on the hardware can run programs with languages of higher abstraction levels than e.~g. the C language.
 That could make it possible to provide safety as well as portability properties. % TODO Remove any mentions of other properties than safety/security since it does not matter in this context?
-The problem with virtual machines on low-power hardware like microcontrollers is their overhead in memory and processing power since the performance that low-level lanugages like C provide often is necessary to be able to run the programs fast enough, so that e.g. the performance requirements for real-time systems are met. %TODO Citation needed?
+The problem with virtual machines on low-power hardware like microcontrollers is their overhead in memory and processing power, since the performance that low-level languages like C provide is often necessary to run the programs fast enough, so that e.~g. the performance requirements for real-time systems are met. %TODO Citation needed?
 
 An example for such a VM is WebAssembly (Wasm), which is designed to isolate different processes in web browsers \cite[p.~3, sect.~4.1, par.~2]{femto_containers} but can be used more generally as well.
-Wasm provides certain security guarantees by specification: memory safety and CFI (see sect.~\ref{sec:cfi}).
+Wasm provides certain security guarantees by specification: memory safety and CFI enforcement (see sect.~\ref{sec:cfi}).
 To ensure memory safety the code executed in the Wasm sandbox gets a specific "contiguous linear memory region" assigned to it.
 Within this memory region the code can access memory and perform its operations.
-Any access outside of the own memory region is a violation of the security policy enforced by the Wasm runtime which translates the linear memory addresses and does the corresponding bounds checking to prevent violation.
-In addition to that the Wasm runtime implements a CFI mechanism.
-Similiar to the CFI mechanism described in section \ref{sec:cfi}, Wasm validates function pointers called by the sandboxed code.
+Any access outside of its own memory region is a violation of the security policy enforced by the Wasm runtime which translates the linear memory addresses and does the corresponding bounds checking to prevent violation.
+In addition to that, the Wasm runtime implements a CFI mechanism.
+Similar to the CFI mechanism described in section \ref{sec:cfi}, Wasm validates function pointers called by the sandboxed code.
 Such function pointers must be present in a runtime table that holds the valid function pointer entries.
 The Wasm runtime considers it a violation if the called function pointer is not valid or the type expected at the call site does not match the type of the called function.
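+
+In a strongly simplified C sketch (the structures and names are illustrative and not taken from a real Wasm engine), the runtime checks for linear memory accesses and indirect calls could look like this:
+%
+\begin{lstlisting}[language=C]
+#include <stdint.h>
+#include <stdlib.h>
+
+typedef void (*wasm_func_t)(void);
+
+typedef struct {
+    uint8_t     *linear_memory; /* contiguous region */
+    size_t       memory_size;
+    wasm_func_t *func_table;    /* valid call targets */
+    int         *func_types;    /* type of each entry */
+    size_t       table_size;
+} wasm_instance_t;
+
+/* Every load/store is translated and bounds-checked. */
+uint8_t load_byte(wasm_instance_t *vm, uint32_t addr) {
+    if (addr >= vm->memory_size)
+        abort(); /* trap: access outside linear memory */
+    return vm->linear_memory[addr];
+}
+
+/* Indirect calls go through the runtime table. */
+void call_indirect(wasm_instance_t *vm, uint32_t idx,
+                   int expected_type) {
+    if (idx >= vm->table_size ||
+        vm->func_types[idx] != expected_type)
+        abort(); /* trap: invalid pointer or type mismatch */
+    vm->func_table[idx]();
+}
+\end{lstlisting}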
 %
@@ -466,38 +470,39 @@ The Wasm runtime considers it a violation if the called function pointer is not
 \label{fig:wasm}
 \end{figure}
 %
-Similiar to the protected shadow call stack in section \ref{sec:cfi} Wasm separates the stack into a data stack and an execution stack as depicted in fig.~\ref{fig:wasm}.
-The data stack is located within the linear memory region of the sandboxed code and holds stack-allocated variables while the execution stack, which contains the target addresses for calls, returns, etc., resides outside of that memory region and therefore prevents traditional stack smashing attacks that overwrite return addresses.
+Similar to the protected shadow call stack in section \ref{sec:cfi}, Wasm separates the stack into a data stack and an execution stack as depicted in fig.~\ref{fig:wasm}.
+The data stack is located within the linear memory region of the sandboxed code and holds stack-allocated variables while the execution stack, which contains the target addresses for calls, returns, etc., resides outside of that memory region.
+That prevents traditional stack smashing attacks that overwrite return addresses.
 \cite[p.~3,4, sect.~3, par.~1-7]{ewasm}
 
 %TODO First Wasm stuff now one sentence concerning MCU implementations? That may not be sufficient?!
 
 There are multiple different implementations of Wasm VMs for MCUs, such as eWASM \cite{ewasm} or WARDuino \cite{warduino}.
-The execution of a benchmark in the eWASM implementation was slower than the native C implementation by a factor of two \cite[p.~8,9, sect.~6~A, par.~2]{ewasm} which is relatively fast but the implementation did make some security tradeoffs for performance reasons like not using the MPU for memory access control \cite[p.~11,13, sect.~7, par.~5]{ewasm} but the authors did not go further into the possible attack vectors regarding that limitation.
+The execution of a benchmark in the eWASM implementation was slower than the native C implementation by a factor of two \cite[p.~8,9, sect.~6~A, par.~2]{ewasm}, which is relatively fast. However the implementation did make some security trade-offs for performance reasons, like not using the MPU for memory access control \cite[p.~11,13, sect.~7, par.~5]{ewasm}, and the authors did not go further into the possible attack vectors regarding that limitation.
 
-Another rather similiar approach to using a VM, that also could guarantee e.~g. memory safety properties, would be to use scripted logic interpreters to virtualize processes on a Microcontroller.
-Examples for implementations are Micropython, which partially implements an interpreter for the python programming language, or RIOTjs which makes use of the JerryScript sripted logic interpreter to execute Javascript code.
+Another rather similar approach to using a VM, that also could guarantee e.~g. memory safety properties, would be to use scripted logic interpreters to implement virtualization on a microcontroller.
+Examples for implementations are MicroPython, which partially implements an interpreter for the Python programming language, or RIOTjs which makes use of the JerryScript scripted logic interpreter to execute JavaScript code.
 Depending on the use case, it may nevertheless not be sufficient to use one of these implementations, because they do not specifically offer process isolation guarantees without additional mechanisms.
 \cite[p.~3,4, sect.~4.1, par.~3]{femto_containers}
 
 \subsubsection*{Attack vectors}
-Even though software running in the Wasm VM can only directly manipulate the linear memory inside the VM this still can make control-flow hijacking attacks dangerous depending on the external functions used inside the software.
-It one example it has been shown that it is possible to get arbitrary file write access by exploiting an application that reads a appends a constant string to a file that is statically defined in the software.
+Even though software running in the Wasm VM can only directly manipulate the linear memory inside the VM, control-flow hijacking attacks might still be possible depending on the external functions used inside the software.
+In one example it has been shown that it is possible to get arbitrary file write access by exploiting an application that appends a constant string to a file defined by a filename that is statically set in the software.
 Assuming that the software has a vulnerability that makes a buffer overflow possible, an attacker might be able to overwrite data on the stack.
-Even though the call stack outside of the linear memory of the software is not directly accessable, the data stack inside the linear memory is.
-The strings that define content of the constant string and the filename that the constant string should be appended to reside inside the data stack which is inside the linear memory that the software has access to.
+Even though the call stack outside of the linear memory of the software is not directly accessible, the data stack inside the linear memory is.
+Both the constant string and the filename that it should be appended to reside on the data stack, which is inside the linear memory that the software has access to.
 An attacker that found a buffer overflow vulnerability might thus be able to overwrite the filename as well as the constant string with arbitrarily chosen values and thereby gain arbitrary write access to files on the system outside of the VM, assuming that no other security policies are enforced by the system the VM is running on.
 \cite[p.~10, sect.~5.2]{wasm_attack}
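+
+A strongly simplified sketch of such a vulnerable application could look as follows (hypothetical code modeled after the cited example; the exact layout of the data stack depends on the compiler):
+%
+\begin{lstlisting}[language=C]
+#include <stdio.h>
+#include <string.h>
+
+void log_event(const char *user_input) {
+    /* All three arrays live on the data stack,
+     * i.e. inside the linear memory of the sandbox. */
+    char buf[16];
+    char filename[16] = "log.txt";
+    char content[32]  = "an event happened";
+
+    /* Vulnerability: no bounds check. Overflowing buf
+     * can overwrite filename and content, since the VM
+     * only traps at the edge of linear memory. */
+    strcpy(buf, user_input);
+
+    FILE *f = fopen(filename, "a");
+    if (f) { fputs(content, f); fclose(f); }
+}
+\end{lstlisting}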
 
 \section{Case-study: Tock OS}
 \label{sec:tock}
 
-The Tock OS aims to combine the usage of type-safe languages (see sect.~\ref{sec:safelang}) and the usage of Hardware architecture enhancements (see sect.~\ref{sec:hardware_arch_enhance}) in the form of the MPU (see sect.~\ref{sec:mpu}) to achieve it's stated objectives:
+The Tock OS aims to combine the usage of type- and memory-safe languages (see sect.~\ref{sec:safelang}) and the usage of hardware architecture enhancements (see sect.~\ref{sec:hardware_arch_enhance}) in the form of the MPU (see sect.~\ref{sec:mpu}) to achieve its stated objectives:
 dependability, concurrency, efficiency and fault isolation.
 \cite[p.~236, sect.~2.2]{tockos}
 
 Since the focus of this report lies on the security properties of OSs for low-power systems, the following examinations will mostly concern the objective of fault isolation.
-The other objectives might also be mentioned but only in so far as they contribute to the security properties in some way.
+The other objectives might also be mentioned, but only insofar as they concern the security properties in some way.
 
 In an OS that consists of multiple components and may even support multiple applications, memory faults may happen in one component or application which may in turn corrupt the state of other components or applications.
 For conciseness, let an actor be either a component or an application on the examined system.
@@ -514,7 +519,7 @@ The following part of this section covers how Tock OS aims to achieve the fault
 \label{fig:tock_arch}
 \end{figure}
 
-Tock OS has an architecture (see fig.~\ref{fig:tock_arch}) that divides the it's software in two classes: capsules and processes.
+Tock OS has an architecture (see fig.~\ref{fig:tock_arch}) that divides its software into two classes: capsules and processes.
 Capsules are software modules that are part of the kernel; they are secured at compile time by the constraints that the programming language enforces (see sect.~\ref{sec:safelang}) and therefore can not be loaded at runtime \cite[p.~238, sect.~3.2, par.~3]{tockos}.
 Processes in contrast can be loaded at runtime and are analogous to the prevalent concept of processes in other OSs: they are isolated from each other with hardware-support using the MPU (see sect.~\ref{sec:mpu}) and can only interact with the kernel using system calls.
 Capsules are scheduled cooperatively and therefore can not interrupt each other, while processes are scheduled preemptively and can be interrupted, causing a context switch if required.
@@ -523,30 +528,31 @@ Capsules are scheduled cooperatively and therefore can not interrupt each other
 \subsubsection{Threat Model}
 \label{sec:tock_threats}
 
-Tock OS separates the threats accourding to the four stakeholders of the system: "board integrators, kernel component developers, application developers, and end-users" \cite[p.~237, sect.~3.1, par.~1]{tockos}
+Tock OS separates the threats according to the four stakeholders of the system: "board integrators, kernel component developers, application developers, and end-users" \cite[p.~237, sect.~3.1, par.~1]{tockos}.
 
 Board integrators are the most trusted stakeholders as they have complete control over the firmware of the MCU since they combine the Tock kernel and the MCU-specific parts of the code.
 The kernel component developers create the capsules which define much of the kernel's functionality, including for example peripheral drivers.
 It is assumed that the board integrators audit the source code of the capsules before compiling the kernel.
-However the kernel component developers are not trusted to protect confidentiality and integrity of other capsules therefore they may starve the CPU but they can not violate capsule isolation like performing unauthorized access to peripherals.
-Application developers build processes that provide functionality to end-users and use the functionality by the kernel and its capsules that is provided to them.
-Applications, running as processes, are considered malicious since the board integrators might not be able to audit the code therefore the should not be able to violate confidentiality, integrity and availability of other parts of the system and can therefore can also not starve the system in contrast to capsules.
-End-users can install, replace or update applications and may interact with I/O periphery but they are also considered untrusted from the perspective of the OS.
+However the kernel component developers are not trusted to protect the confidentiality and integrity of other capsules: they may starve the CPU, but they can not violate capsule isolation, for example by performing unauthorized accesses to peripherals.
+Application developers build applications which run as processes that provide functionality to end-users and use the functionality provided by the kernel and its capsules.
+Applications are considered malicious since the board integrators might not be able to audit their code; they should therefore not be able to violate the confidentiality, integrity and availability of other parts of the system and, in contrast to capsules, can also not starve the system.
+End-users can install, replace or update applications and may interact with I/O peripherals but are also considered untrusted from the perspective of the OS.
 \cite[p.~237, sect.~3.1, par.~$\geq$2]{tockos}
 
 \subsubsection{Capsules}
 \label{sec:capsule}
 
 %TODO Add that capsules share a stack
-To be able to enforce certain constraints in the capsules, as components of the kernel, it is necessary for a capsule to be written in a memory-safe and type-safe language to prevent issues, like buffer overflows and use-after-free, prevalent in unsafe, low-level languages like C.
-Due to capsules being scheduled cooperatively, they must be trusted to ensure to not limit the capabilities of other capsules, concerning ressource and timing requirements. A malicious capsule could for example starve other capsules of processing time because they can not interrupt each other.
+To be able to enforce certain security constraints in the capsules, as components of the kernel, it is necessary for a capsule to be written in a memory-safe and type-safe language to prevent issues like buffer overflows and use-after-free vulnerabilities that are prevalent in unsafe, low-level languages like C.
+Due to capsules being scheduled cooperatively, they must be trusted not to limit the capabilities of other capsules concerning resource and timing requirements. A malicious capsule could for example starve other capsules of processing time because they can not interrupt each other.
 \cite[p.~238, sect.~3.2, par.~1,2]{tockos}
 However the Tock threat model \cite[p.~237, sect.~3.1]{tockos} assumes that capsules, for example drivers for third party hardware components supplied by the vendors, will be audited before being compiled into the kernel.
-In the case that bugs or malicious parts of capsules are not identified by the audit, Tock still aims to guarantee the confidentiality and integrity of the other capsules even if availability is may not be guaranteed.
-Therefore a malicious or erroneous capsule might block ressources or restart the system but it can not aquire access permissions for memory or hardware that it is not supposed to have access to.
+In the case that bugs or malicious parts of capsules are not identified by the audit, Tock still aims to guarantee the confidentiality and integrity of the other capsules even if availability is not guaranteed.
+Therefore a malicious or erroneous capsule might block resources or restart the system but it can not acquire access permissions for memory or hardware that it is not supposed to have access to.
 \cite[p.~237, sect.~3.1, par.~3]{tockos}
 
-But different capsules in Tock OS may require different levels of trust: core kernel capsules, e.g. the process scheduler, must be able to directly manipulate CPU registers which requires a high level of trust, while on the other hand some peripheral drivers which may only use hardware-agnostic interfaces to communicate with the peripherals could be treated as fully untrusted without limiting functionality.
+But different capsules in Tock OS may require different levels of trust: core kernel capsules, such as the process scheduler, must be able to directly manipulate CPU registers, which requires a high level of trust.
+On the other hand, some peripheral drivers, which may only use hardware-agnostic interfaces to communicate with the peripherals, could be treated as fully untrusted without limiting functionality.
 The capsules that require a high level of trust, by being able to perform actions outside the Rust type system through directly interacting with the hardware, %like casting memory mapped registers to type-safe structs 
 are limited to a small number.
 By limiting the number of trusted capsules and building other more complex capsules with the abstractions provided by those fewer trusted capsules, Tock OS aims to maintain its confidentiality and integrity goals.
@@ -557,10 +563,10 @@ By limiting the number of trusted capsules and building other more complex capsu
 \subsubsection{Processes}
 \label{sec:process}
 
-In contrast to capsules which are isolated due to the memory- and type-safety properties of the Rust language, processes in Tock OS are hardware-solated.
-Every process has it's own independent memory region with a separate stack, heap and static variables which enables preemptive scheduling, to make long-running computations interruptable and possibly provide a fair distribution of computing time between processes if that is desired (Tock OS uses a round-robin scheduler \cite[p.~239, sect.~3.3, par.~1]{tockos}).
+In contrast to capsules which are isolated due to the memory- and type-safety properties of the Rust language, processes in Tock OS are hardware-isolated.
+Every process has its own independent memory region with a separate stack, heap and static variables, which enables preemptive scheduling to make long-running computations interruptible and possibly provide a fair distribution of computing time between processes if that is desired (Tock OS uses a round-robin scheduler \cite[p.~239, sect.~3.3, par.~1]{tockos}).
 
-Even though the MPU can provide memory-isolation it does not enable virtual memory so processes are not provided with the illusion of infinite memory and all used addresses in the system are absolute, physical ones (see sect.~\ref{sec:mpu}).
+Even though the MPU can provide a certain level of memory-isolation, it does not enable virtual memory so processes are not provided with the illusion of infinite memory and all used addresses in the system are absolute, physical ones (see sect.~\ref{sec:mpu}).
 Also, no code in the form of shared libraries is shared between different processes.
 Due to the hardware-isolation, the programming language used in the processes is not limited to Rust but can also be C or other languages without compromising the stated safety properties (see sect.~\ref{sec:tock}).
 Processes interact with the kernel using a system-call interface and between each other using an Inter Process Communication (IPC) implementation.
@@ -572,8 +578,8 @@ In the current state low-latency direct hardware access for processes, for perfo
 \subsubsection{Kernel interface}
 
 The interface for applications to communicate with the kernel in Tock OS consists of five system calls: \lstil{command}, \lstil{allow}, \lstil{subscribe}, \lstil{memop} and \lstil{yield} \cite[p.~240, tab.~3]{tockos}.
-It is geared to event-driven systems and therefore any system call is non-blocking \cite[p.~239, sect.~3.3, par.~2]{tockos}, in contrast to other system call interfaces, such as in Linux.
-Using the \lstil{command} call processes can make requests to capsules with integer arguments, e.g. to configure timers.
+It is geared towards event-driven systems and therefore most system calls are non-blocking \cite[p.~239, sect.~3.3, par.~2]{tockos}, in contrast to other system call interfaces, such as in Linux.
+Using the \lstil{command} call processes can make requests to capsules with integer arguments, for example to configure timers.
 The \lstil{allow} call allows a process to pass more complex data to a capsule using a data buffer, which gets verified by the kernel by checking whether the specified pointer and length of the buffer lie within the process's memory. %TODO analogous to Self-Protecting Modules?
 The \lstil{subscribe} call can be used by a process to create a callback for a request to a specific capsule, for example a notification for a network packet arrival.
 The callback will be invoked by the capsule as soon as the packet arrives and the corresponding process can then call \lstil{yield}, which blocks the process until the callback has been processed.
@@ -583,24 +589,24 @@ At last \lstil{memop} can be used to increase the heap size of a process.
 \subsection{Grants}
 \label{sec:grants}
 
-To be able to for example answer a request from a process, a capsule might need to dynamically allocate memory for the response.
-Tock uses so called grants to solve this problem. A grant is a granted separate section of memory, including and API to access it, in the memory space of each process that the kernel can use as a heap for operations concerning the corresponding process.
-Since that kernel heap is therefore distributed over the memory of the processes, grant allocation do not interfere with each other since they do not share the memory region.
-If a process fails the ressources of the grant can be freed easily since they are bound to the process instead of the kernel.
-A capsule can create a grant as being of any abritrary type or data structure.
+To be able to, for example, answer a request from a process, a capsule might need to dynamically allocate memory for the response, even though the memory of the capsules themselves is statically allocated.
+Tock uses so called grants to solve this problem. A grant is a dedicated section of memory in the memory space of each process, including an API to access it, that the capsules can use as a heap for operations concerning the corresponding process.
+That kernel heap is therefore distributed over the memory of the processes, so grant allocations do not interfere with each other since they do not share the same memory region.
+If a process fails, the resources of the grant can be freed easily since they are bound to the process instead of the kernel.
+A capsule can create a grant as being of any arbitrary type or data structure.
 Grants do not initially allocate memory: a grant is only conceptual until a process actually makes use of it by calling a certain method on the grant (\lstil{enter}).
 To preserve safety when using grants the kernel enforces that memory allocated by capsules does not break the type system, that grants can only be used as long as the corresponding process is still alive and finally that the grant memory can be freed if the corresponding process terminates.
 \cite[p.~240, sect.~4]{tockos}
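+
+To illustrate the idea (a conceptual C sketch with invented names; the actual implementation relies on Rust's type system), a grant can be thought of as a lazily allocated, per-process region that can only be entered while the corresponding process is alive:
+%
+\begin{lstlisting}[language=C]
+#include <stdbool.h>
+#include <stddef.h>
+
+#define MAX_PROCS 4
+
+/* Assumed kernel primitives. */
+bool  process_alive(int pid);
+void *alloc_in_grant_region(int pid, size_t n);
+
+typedef struct {
+    size_t type_size;        /* fixed by the capsule */
+    void  *alloc[MAX_PROCS]; /* one region per process */
+} grant_t;
+
+/* enter: run an operation on the grant memory of one
+ * process; memory is only allocated on first use. */
+bool grant_enter(grant_t *g, int pid, void (*op)(void *)) {
+    if (!process_alive(pid))
+        return false; /* grants of dead processes are freed */
+    if (g->alloc[pid] == NULL)
+        g->alloc[pid] = alloc_in_grant_region(pid, g->type_size);
+    if (g->alloc[pid] == NULL)
+        return false; /* allocation failed */
+    op(g->alloc[pid]);
+    return true;
+}
+\end{lstlisting}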
 
 \begin{figure}[htbp]
 \centerline{\includegraphics[width=\linewidth]{pictures/tock_grants.png}}
-\caption{The memory layout of a process in Tock OS which consists of the traditional process-accessable heap, data and stack regions and the processor-inaccessable grant region on top which belongs to the protected kernel state \cite[p.~242, fig.~4]{tockos}}
+\caption{The memory layout of a process in Tock OS which consists of the traditional process-accessible heap, data and stack regions and the process-inaccessible grant region on top which belongs to the protected kernel state \cite[p.~242, fig.~4]{tockos}.}
 \label{fig:grants}
 \end{figure}
 
 Tock makes further use of the MPU to enforce that a process can not directly access the grant memory in its memory region, otherwise the process could directly read or write kernel data structures and violate the guarantees that the Rust language provides for the kernel (see fig.~\ref{fig:grants}).
 \cite[p.~240, sect.~4.1]{tockos}
-To enforce that a capsule can not access grant memory of dead processes, Tock enforces that the kernel never runs in parallel with the processes and additionally always checks the validity of the process identifier that the grant memory that is going to be accessed belongs to.
+To enforce that a capsule can not access the grant memory of dead processes, Tock enforces that the kernel never runs in parallel with the processes and additionally always checks the validity of the corresponding process identifier.
 \cite[p.~240, sect.~4.2]{tockos}
 
 \subsection{MPU usage in Tock}
@@ -614,22 +620,22 @@ In contrast to the processes, the MPU is disabled, as long as the kernel is acti
 %\subsection{Evalutation} %TODO of Tock OS against what? by which measure? Maybe in compairison to the other approaches? How they could be used to enhance the security properties even more?
 \section{Evaluation}
 
-Even though Tock OS already makes use of a type-safe language (see sect.~\ref{sec:safelang}) in form of Rust and of hardware architecture enhancements (see sect.~\ref{sec:hardware_arch_enhance}) like the MPU (see sect.~\ref{sec:mpu}) it could be possible that Tock OS might also benefit from other approaches, including the others mentioned in section~\ref{sec:approaches}.
+Even though Tock OS already makes use of a safe language (see sect.~\ref{sec:safelang}) in the form of Rust and of hardware architecture enhancements (see sect.~\ref{sec:hardware_arch_enhance}) like the MPU (see sect.~\ref{sec:mpu}), it might also benefit from the other approaches mentioned in section~\ref{sec:approaches}.
 
-In addition to the MPU, one of the the PMA (see sect.~\ref{sec:pma}) approaches might also be beneficial to the security properties of especially the trusted capsules in the kernel (see sect.~\ref{sec:capsule}) since the MPU might not be as powerful of an isolation mechanism as for examble a TEE.
+In addition to the MPU, one of the PMA (see sect.~\ref{sec:pma}) approaches might also be beneficial to the security properties of especially the trusted capsules in the kernel (see sect.~\ref{sec:capsule}), since the MPU might not be as powerful of an isolation mechanism as for example a TEE.
 
 Formal verification (see sect.~\ref{sec:verify}) is currently not applied in any form in Tock OS.
 This might not be especially beneficial for the implementation of the processes or the untrusted capsules, since these are secured using hardware and the features of the Rust language, assuming that the implementation of the hardware and the Rust compiler is correct.
-But since the kernel's core and the trusted capsules have to use parts of unsafe code to function, especially when directly manipulating the hardware, formal verification of these trusted und partly unsafe parts of the OS could be used to identify critical bugs and design flaws, as stated by the project itself \cite[p.~248, 249, sect.~7, par.~6]{tockos}.
+But since the kernel's core and the trusted capsules have to use parts of unsafe code to function, especially when directly manipulating the hardware, formal verification of these trusted and partly unsafe parts of the OS could be used to identify critical bugs and design flaws, as stated by the project itself \cite[p.~248, 249, sect.~7, par.~6]{tockos}.
 
-Even though virtualization and CFI could be applied additionally for individual processes or capsules the resulting benefit is not clear but especially since processes can be written in any language, it could be beneficial to apply a CFI mechanism for the internal integrity of the software even though it should not be possible for the attacker to corrupt other processes or capsules assuming that the security guarantees hold.
+Even though virtualization and CFI could additionally be applied to individual processes or capsules, the resulting benefit is not clear. However, especially since processes can be written in any language, it could be beneficial to apply a CFI mechanism for the internal integrity of the software, even though it should not be possible for an attacker to corrupt other processes or capsules assuming that the security guarantees hold.
 
 \section{Conclusion}
 \label{sec:conclusion}
 
-Especially due to the constrained ressources securing low-power IoT system software will continue to be a challenging problem.
-Since the requirement of the devices to use less power, have smaller form-factors and be cheaper will also probably not change the mentioned hardware ressource limitations will presumably pervail for the foreseeable future \cite[p.~249, sect.~8, par.~1]{tockos}.
-But as these device become more and more integrated in every aspect of our live it will become even more important to ensure certain security guarantees to provide these systems with the necessary confidentiality, integrity and availability.
+Especially due to the constrained resources of the corresponding devices, securing low-power IoT system software will continue to be a challenging problem.
+Since the requirements of the devices to use less power, have smaller form-factors and be cheaper will also probably not change, the mentioned hardware resource limitations will presumably prevail for the foreseeable future \cite[p.~249, sect.~8, par.~1]{tockos}.
+But as these devices become more and more integrated in every aspect of our lives, it will become even more important to ensure certain security guarantees to provide these systems with the necessary confidentiality, integrity and availability.
 The presented approaches might be able to provide some of the security properties that are needed to reach the goal of a widespread usage of sufficiently secure low-power IoT system software that is able to protect itself and its users.
 
 %\section{Prospects}