Saturday, July 25, 2015

[HOWTO] Build and run seL4 on RISC-V targets

This post gives instructions on how to build seL4 to run on RISC-V targets (currently the Spike simulator and Rocket Chip/FPGA). The default, and currently only, application is SOS [1], a simple operating system running on top of seL4. Other simple applications can be developed on top of this seL4/RISC-V port in the same way.

Prerequisites

The development environment is Linux; you need the following tools installed before proceeding with the build/run process:
  • riscv32-unknown-elf- toolchain [2]
  • Spike [3]
  • fesvr [4]
  • Python
  • git
  • gpg
Optional (if you want to build/run seL4 on Rocket Chip/FPGA):
  • Xilinx/Vivado 
  • Scala
  • Chisel

Build/Run seL4/RISC-V

Assuming all the previous packages are installed, the build system (and steps) are exactly the same as for other seL4 projects [5], but it uses my own repos because the port is not upstream (yet).

1- Get repo
mkdir -p ~/bin
export PATH=~/bin:$PATH
curl https://storage.googleapis.com/git-repo-downloads/repo > ~/bin/repo
chmod a+x ~/bin/repo
2- Fetch seL4/RISC-V project
 
The following commands fetch the seL4 microkernel, SOS, tools, and all the libraries required to build a complete seL4/RISC-V system (currently the SOS project):

mkdir seL4riscv
cd seL4riscv
repo init -u https://github.com/heshamelmatary/sel4riscv-manifest.git
repo sync 
3- Build seL4/RISC-V project
 
The default project is SOS, which runs on Spike in RV32 mode with the SV32 memory system. First type:
make riscv_defconfig
Make sure the ROCKET_CHIP option is UNCHECKED:

make menuconfig
seL4 kernel > seL4 System
Save and exit, then build:
make
You should find the sos image under the images directory.

SOS binary images

4- Run seL4/RISC-V (SV32) on Spike (and jor1k)

Running SOS on Spike is easy: just type the following command and you should see some interesting output (for more details about SOS, see [6]):

make simulate-spike

SOS running on seL4 on RISC-V 

seL4/RV32/SV32 (the same image produced by the previous steps) can also run on jor1k [8]; however, jor1k's RISC-V support is still under development and may not work properly.

Build and run seL4/RISC-V (SV39) on Spike/Rocket

Building seL4 for RV64 follows almost the same steps as above; the differences are as follows:

1- Assuming you have all the required repos (see steps 1 and 2 above), switch the kernel branch to sel4Rocket:

cd kernel/
git checkout sel4Rocket
cd ..

2- This time, make sure ROCKET_CHIP is CHECKED:

make riscv_defconfig
make menuconfig

ROCKET_CHIP config option checked
Save and exit.

3- Build and run seL4/RV64/SV39

make 
make simulate-spike64

The same seL4/SV39 image can run on Rocket Chip on an FPGA. Just follow the instructions on how to build Rocket Chip for FPGAs [7]. Note that you have to build all the FPGA-related components along with the software tools (zynq-fesvr) from scratch to get the latest privileged-spec-compliant tools (the prebuilt images are not updated).

If you've followed the previous steps and run into any issues, just let me know. Your feedback, bug reports, and feature requests are welcome.

References


[1] seL4 on RISC-V is running SOS (Simple Operating System)
[2] https://github.com/riscv/riscv-gnu-toolchain/
[3] https://github.com/riscv/riscv-isa-sim
[4] https://github.com/riscv/riscv-fesvr
[5] http://sel4.systems/Download/
[6] http://www.cse.unsw.edu.au/~cs9242/14/project/framework.shtml
[7] https://github.com/ucb-bar/fpga-zynq
[8] http://jor1k.com/jor1k/demos/riscv.html

Friday, July 17, 2015

seL4 runs on Rocket Chip (RISC-V/FPGA)


Abstract

After running on the Spike simulator, seL4 can now run on the latest version of the Rocket Chip code on FPGA, the first hardware platform the seL4/RISC-V port runs on. Moreover, seL4 runs on the online jor1k emulator [1]. This can be considered a starting point for both RISC-V and seL4 to experiment with new security- and/or scalability-related solutions, given the flexibility to easily modify the hardware to match seL4's requirements (or vice versa), and given that both projects are open source and under active research development.

Details

Previously, seL4/RISC-V ran only on RV32 (RISC-V 32-bit mode), which relies on SV32 (the page-based 32-bit virtual-memory system). SV32 was only supported by the Spike simulator, which is why seL4 could previously run only there. The 32-bit seL4 port has progressed to the point where it can run not only a hello-world application but also a simple operating system on top of it [2] that can fork other applications. That is good progress, but it would be better to have it running on real hardware.

The issue was that the only open-source RISC-V hardware currently available is Rocket Chip, which doesn't support 32-bit (SV32) mode, only RV64 (with the SV39 and SV48 virtual-memory systems, which are available only in RISC-V 64-bit mode). Since seL4 has no 64-bit support (yet), it would have been hard and time-consuming to refactor the entire seL4 code base to run as 64-bit code (including pointers, variables, data structures, scripts, etc.).

How does 32-bit seL4 run on RV64/SV39?


The workaround was to keep running 32-bit seL4, but to modify the low-level, target-dependent RISC-V MMU-handling code so that it runs on the RV64/SV39 memory system without touching the target-independent seL4 code. This was possible because both RV32 and RV64 execute fixed-width 32-bit instructions. So, basically, only the memory configuration (apart from the HTIF code) had to be modified to make this step possible, including initialization and the page-table layout. This means that seL4 is still built and compiled with the riscv32-* toolchain.

The seL4 components that had to be changed are:
  • vspace.c
  • elfloader
vspace.c is the architecture-dependent core file (in every seL4 port) for MMU handling in the seL4 microkernel. A new file, vspace64.c, was added for RV64 mode. The difference from the 32-bit version is that 4KiB pages now go through a 3-level page-table implementation instead of just 2 levels (SV32).
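To illustrate the difference, here is a minimal sketch of how a virtual address is split into page-table indices, assuming the standard geometry from the RISC-V privileged spec (two 10-bit VPN fields for SV32, three 9-bit VPN fields for SV39); the macro names are hypothetical and are not taken from vspace64.c:

#include <stdint.h>

/* SV32: 2-level walk, 32-bit virtual addresses */
#define SV32_VPN1(va)  (((uint32_t)(va) >> 22) & 0x3ff)  /* level-1 (root) index */
#define SV32_VPN0(va)  (((uint32_t)(va) >> 12) & 0x3ff)  /* level-0 index */

/* SV39: 3-level walk, 39-bit virtual addresses; mapping a 4KiB page
 * now requires walking (and, if missing, allocating) one extra level. */
#define SV39_VPN2(va)  (((uint64_t)(va) >> 30) & 0x1ff)  /* level-2 (root) index */
#define SV39_VPN1(va)  (((uint64_t)(va) >> 21) & 0x1ff)  /* level-1 index */
#define SV39_VPN0(va)  (((uint64_t)(va) >> 12) & 0x1ff)  /* level-0 index */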

The elfloader statically maps the seL4 microkernel image through a 2-level page table at 2MiB page granularity, whereas on SV32 it maps seL4 at 4MiB granularity (with just a one-level page table).
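A rough sketch of such a static mapping loop is shown below, assuming 2MiB SV39 megapages and an image that fits within the 1GiB region covered by a single level-1 table; make_leaf_pte() and the table name are hypothetical stand-ins, not the actual elfloader code:

#include <stdint.h>

#define SV39_MEGA_SIZE  (2u * 1024u * 1024u)   /* a level-1 leaf covers 2 MiB */

extern uint64_t kernel_l1_table[512];          /* level-1 table for the kernel image */
uint64_t make_leaf_pte(uint64_t paddr);        /* hypothetical PTE encoder */

/* Statically map [paddr, paddr + size) at vaddr with 2 MiB megapages. */
static void map_kernel_image_sv39(uint64_t vaddr, uint64_t paddr, uint64_t size)
{
    for (uint64_t off = 0; off < size; off += SV39_MEGA_SIZE) {
        uint64_t idx = ((vaddr + off) >> 21) & 0x1ff;      /* VPN[1] */
        kernel_l1_table[idx] = make_leaf_pte(paddr + off); /* 2 MiB leaf PTE */
    }
}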
 
The RV32/SV32 implementation of seL4 can cover a 4 GiB address space, while RV64/SV39 extends this to 2^9 GiB. The seL4 port only uses 256 MiB, so that much SV39 memory wasn't needed. This made things simpler: only two entries of the first-level page table are used, one for the first GiB (reserved for applications) and a second one for the kernel page tables. The first-level page table (AKA page directory) is then shared between all applications, but write access to it is exclusive to the seL4 microkernel. New applications (address spaces) just allocate and fill second-level (and, if 4KiB pages are needed, third-level) page tables without touching the first level.

During context switches, the address-space change is done by writing the address of the second-level page table (allocated by applications) into the first entry of the page directory rather than updating the sptbr register. This solution has the limitation of restricting application address-space mappings to the first GiB of the virtual address range, but it has the benefit of saving both memory and time. Memory is saved because there is no need to allocate three levels of page tables when creating a new task, only two (or even one), as with SV32. Time is saved because seL4 used to copy the kernel mapping for each created task; this is no longer needed, since the kernel mapping lives in a separate second entry of the page directory, covering 256 MiB within the second GiB of virtual memory. The SV39 mapping of seL4 is shown in the next figure.
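The context-switch trick can be sketched as follows; this is a hypothetical illustration of the scheme described above, not the actual seL4 code, and make_table_pte() stands in for whatever non-leaf PTE encoding the privileged spec in use requires:

#include <stdint.h>

extern uint64_t page_directory[512];           /* shared first-level (root) table */
uint64_t make_table_pte(uint64_t pt_paddr);    /* hypothetical non-leaf PTE encoder */

static inline void flush_tlb(void)
{
    /* older priv-spec mnemonic; later specs renamed it to sfence.vma */
    __asm__ volatile("sfence.vm" ::: "memory");
}

/* Switch to the task whose second-level page table starts at pt_paddr. */
static void switch_address_space(uint64_t pt_paddr)
{
    page_directory[0] = make_table_pte(pt_paddr); /* remap only the first GiB     */
    flush_tlb();                                  /* sptbr itself stays untouched */
}

In this scheme, sptbr never changes: a switch costs one root-entry write plus a TLB flush, and the kernel mapping in entry 1 is never touched.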

seL4 mapping on RV64/SV39



Next, I will be working on cleaning up the code, writing tutorials on how to build and run seL4 on different RISC-V platforms, fixing bugs, and adding more features.

References