Tuesday, December 15, 2015

RTEMS port for RISC-V, with/without seL4 support





This is a brief update on the progress of the RTEMS port to RISC-V.

The RTEMS port for the RISC-V architecture (currently riscv32) runs the Hello World and Ticker samples (with a simulated timer) on both the Spike simulator and the seL4 microkernel (two cores). The GitHub repo of the port is here [1].






There are two BSPs currently:

1) riscv_generic: This BSP is intended to run in Machine mode, and has been tested on Spike.

To run it, configure and build RTEMS with:

$ ../rtems/configure --target=riscv32-rtems4.12 --disable-posix --disable-networking --disable-itron --enable-rtemsbsp=riscv_generic

$ make

Command to run on Spike:

$ spike --isa=RV32 riscv32-rtems4.11/c/riscv_generic/testsuites/samples/ticker/ticker.exe

2) riscv_seL4: This BSP assumes it runs with the support of the seL4 microkernel, in Supervisor mode (on another core). The seL4 application allocates and maps memory for it from its untyped memory (in userspace) before off-loading it to the other core. A rough sketch of that seL4-side setup is shown below.
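Roughly, the seL4-side loader does something like the following sketch before starting RTEMS on the second core. The object-type and invocation names (seL4_Untyped_Retype, seL4_RISCV_4K_Page, seL4_RISCV_Page_Map, seL4_RISCV_Default_VMAttributes) follow today's mainline seL4 RISC-V API and are assumptions about this port; the capability slots and sizes are placeholders.

#include <assert.h>
#include <sel4/sel4.h>

/* Sketch: carve 4KiB frames out of an untyped capability and map them where
 * the RTEMS image will run. Identifier names are assumptions, not this
 * port's exact API. */
static void map_rtems_image(seL4_CPtr untyped, seL4_CPtr rtems_vspace,
                            seL4_Word load_vaddr, seL4_Word num_pages,
                            seL4_Word first_free_slot)
{
    for (seL4_Word i = 0; i < num_pages; i++) {
        seL4_Word slot = first_free_slot + i;

        /* Retype part of the untyped memory into a 4KiB frame capability. */
        int err = seL4_Untyped_Retype(untyped, seL4_RISCV_4K_Page, 0,
                                      seL4_CapInitThreadCNode, 0, 0, slot, 1);
        assert(err == seL4_NoError);

        /* Map the frame into the address space prepared for RTEMS. */
        err = seL4_RISCV_Page_Map(slot, rtems_vspace,
                                  load_vaddr + i * 4096,
                                  seL4_AllRights,
                                  seL4_RISCV_Default_VMAttributes);
        assert(err == seL4_NoError);
    }
}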

To run it with seL4, you first need to get the seL4-rtems project and configure it. Before building seL4, two shell variables have to be exported so that seL4 knows where to find the RTEMS image and which one to load:

$ export RTEMS_IMAGE="Absolute path to the RTEMS .exe image"
$ export RTEMS_IMG_NAME="Name of the RTEMS image you would use, e.g. hello.exe, ticker.exe"

To build/run the seL4-rtems project, follow the same instructions here [2], with the only difference being that you replace:

repo init -u https://github.com/heshamelmatary/sel4riscv-manifest.git

with

repo init -u https://github.com/heshamelmatary/sel4riscv-rtems-manifest.git

which fetches the seL4-rtems project instead.

Finally, run both seL4 and RTEMS on Spike:

$ spike --isa=RV32 -p2 images/sos-image-riscv-spike 

References 


Monday, October 12, 2015

A talk about my GSoC project with lowRISC - ORCONF Conference 2015


It was great to give a talk about my Google Summer of Code project with lowRISC at the fourth edition of the ORCONF conference, held this year at CERN in Geneva, Switzerland. ORCONF is concerned with open-source digital hardware design and embedded systems, motivated by the great success of open-source software. lowRISC, my GSoC organization, which aims to produce a fully open hardware system, was one of the participating organizations there.

During my talk at ORCONF 2015
The conference was a great success, with about 30 talks ranging from physics up to software. There was also an interesting discussion about how to adapt open-source software licenses to hardware designs.

lowRISC, based at the University of Cambridge, participated in GSoC for the first time this year as an umbrella organization covering open-source projects such as RISC-V, seL4, Rump Kernels, jor1k and YosysJS. I was fortunate enough to be one of the three students accepted to work with lowRISC out of its 52 GSoC applicants. This gave me the chance to have a greatly rewarding experience working on porting seL4 to RISC-V/lowRISC.

During the GSoC'15 program, I had the chance to work on the world's first formally verified operating system kernel, seL4 (which is also open-source), porting it to run on lowRISC/RISC-V, both open hardware architectures. The project also involved some work on the open-source muslc library. I even worked on some digital design tasks out of curiosity. My mentor, Stefan Wallentowitz, and the lowRISC organizers, Alex Bradbury and Robert Mullins, have been of great help even after GSoC ended.

The outcome of the project was good enough to present at ORCONF. After my talk (see the slides), I got some positive feedback and ideas for future work, had interesting discussions, and spoke with people who want to build on, and make use of, my project.

Right after I returned to York, I received my GSoC T-shirt and certificate.

GSoC'15 T-Shirt and certificate 
I'd like to take this opportunity to thank the GSoC, lowRISC and ORCONF organizers, and I look forward to continuing to work with them.

Saturday, July 25, 2015

[HOWTO] Build and run seL4 on RISC-V targets

This post gives instructions on how to build seL4 to run on RISC-V targets (currently the Spike simulator and Rocket Chip on FPGA). The default, and currently only, application is SOS [1], a simple operating system running on top of seL4. This means other simple applications can be developed based on this seL4/RISC-V port.

Prerequisites

The development environment is Linux. You need the following tools installed before proceeding with the build/run process:
  • riscv32-unknown-elf- [2]
  • Spike  [3]
  • fesvr [4]
  • Python
  • git
  • gpg
Optional (If you want to build/run seL4 on Rocket Chip/FPGA):
  • Xilinx/Vivado 
  • Scala
  • Chisel

Build/Run seL4/RISC-V

Assuming all the previous packages are installed, the build system (and steps) are exactly the same as for other seL4 projects [5], but it uses my own repos because the port is not upstream (yet).

1- Get repo
mkdir -p ~/bin
export PATH=~/bin:$PATH
curl https://storage.googleapis.com/git-repo-downloads/repo > ~/bin/repo
chmod a+x ~/bin/repo
2- Fetch seL4/RISC-V project
 
The following commands fetch the seL4 microkernel, SOS, the tools, and all the libraries required to build a complete seL4/RISC-V system (currently the SOS project):

mkdir seL4riscv
cd seL4riscv
repo init -u https://github.com/heshamelmatary/sel4riscv-manifest.git
repo sync 
3- Build seL4/RISC-V project
 
The default project is SOS, running on Spike in RV32 mode with the SV32 memory system. First type:
make riscv_defconfig
Make sure the ROCKET_CHIP option is UNCHECKED:

make menuconfig
seL4 kernel > seL4 System
make
You should find the SOS image under the images directory.

SOS binary images

4- Run seL4/RISC-V (SV32) on Spike (and jor1k):

Running SOS on Spike is easy; just type the following command and you should see some interesting output (for more details about SOS, see [6]):

make simulate-spike

SOS running on seL4 on RISC-V 

The same seL4/RV32 image produced by the previous steps can also run on jor1k [8]; however, jor1k support for RISC-V is still under development and may not work properly.

* Build and run seL4/RISC-V (SV39) on Spike/Rocket

Building seL4 for RV64 follows almost the same steps as above; the differences are as follows:

1- Assuming you have all the required repos (see steps 1 and 2 above), switch the kernel branch to sel4Rocket:

cd kernel/
git checkout sel4Rocket
cd ..

2- This time, make sure ROCKET_CHIP is CHECKED:

make riscv_defconfig
make menuconfig

ROCKET_CHIP config option checked
Save and exit.

3- build and run seL4/RV64/SV39

make 
make simulate-spike64

The same seL4/SV39 image can run on the Rocket Chip on an FPGA. Just follow the instructions on how to build the Rocket Chip for FPGAs [7]. Note that you have to build all the FPGA-related components along with the software tools (zynq-fesvr) from scratch to get the latest privileged-spec-compliant tools (the prebuilt images are not up to date).

If you've followed the previous steps and found any issues, just let me know. Your feedback, bug reports and feature requests are welcome.

References


[1] seL4 on RISC-V is running SOS (Simple Operating System)
[2] https://github.com/riscv/riscv-gnu-toolchain/
[3] https://github.com/riscv/riscv-isa-sim
[4] https://github.com/riscv/riscv-fesvr
[5] http://sel4.systems/Download/
[6] http://www.cse.unsw.edu.au/~cs9242/14/project/framework.shtml
[7] https://github.com/ucb-bar/fpga-zynq
[8] http://jor1k.com/jor1k/demos/riscv.html

Friday, July 17, 2015

seL4 runs on Rocket Chip (RISCV/FPGA)


Abstract

After running on the Spike simulator, seL4 can now run on the latest version of the Rocket Chip code on FPGA, the first hardware platform the seL4/RISC-V port can run on. Moreover, seL4 runs on the online jor1k emulator [1]. This can be considered a starting point for both RISC-V and seL4 to experiment with new security- and/or scalability-related solutions, based on the flexibility to easily modify the hardware according to seL4's requirements (or vice versa), given that both are open-source and under active research and development.

Details

Previously, seL4/RISC-V only ran on RV32 (RISC-V 32-bit mode), which relies on SV32 (the page-based 32-bit virtual-memory system). SV32 was only supported by the Spike simulator, and that's why seL4 could previously only run there. The 32-bit seL4 has progressed to the point that it can run not only a hello world application, but also another simple operating system on top of it [2] that can fork other applications. That is good progress, but it would be better to have it running on real hardware.

The issue was that the only open-source RISC-V hardware is currently the Rocket Chip, which doesn't support 32-bit (SV32) mode, only RV64 (with the SV39 and SV48 virtual-memory systems, which run only in RISC-V 64-bit mode). Since there's no 64-bit support in seL4 (yet), it would have been hard and time-consuming to refactor the entire seL4 code base to run as 64-bit code (including pointers, variables, data structures, scripts, etc.).

How does 32-bit seL4 run on RV64/SV39?


The workaround was to keep running 32-bit seL4, but modify the low-level, target-dependent RISC-V MMU handling code to run on the RV64/SV39 memory system without touching the target-independent seL4 code. This was possible because both RV32 and RV64 execute fixed-width 32-bit instructions. So basically, only the memory configuration (apart from the HTIF code) had to be modified to make this step possible, including initialization and the page-table layout. This means that seL4 is still built and compiled using the riscv32-* toolchain.

The seL4 components that had to be changed are:
  • vspace.c
  • elfloader
vspace.c is the architecture-dependent core file (in all seL4 ports) for MMU handling in the seL4 microkernel. A new file, vspace64.c, was added to run in RV64 mode. The difference from the 32-bit version is that 4KiB pages now require a three-level page-table walk instead of just two levels (SV32).
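For illustration (this is not code from the port), here is how a virtual address splits into page-table indices under each scheme according to the RISC-V privileged specification:

#include <stdint.h>

/* SV32: two levels, 10-bit indices, 4KiB pages. */
#define SV32_VPN1(va)  (((uint32_t)(va) >> 22) & 0x3ff)
#define SV32_VPN0(va)  (((uint32_t)(va) >> 12) & 0x3ff)

/* SV39: three levels, 9-bit indices, 4KiB pages. */
#define SV39_VPN2(va)  (((uint64_t)(va) >> 30) & 0x1ff)
#define SV39_VPN1(va)  (((uint64_t)(va) >> 21) & 0x1ff)
#define SV39_VPN0(va)  (((uint64_t)(va) >> 12) & 0x1ff)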

elfloader now statically maps the seL4 microkernel image with two page-table levels at 2MiB page granularity, whereas on SV32 it maps seL4 at 4MiB granularity (just one page-table level).
 
The RV32/SV32 implementation of seL4 can cover a 4GiB address space, while RV64/SV39 extends this to 512GiB (2^39 bytes). The seL4 port only uses 256MiB, so that much SV39 memory wasn't needed. This made things simpler: only two entries of the first-level page table are used, one for the first GiB (reserved for applications) and a second one for the kernel page tables. The first-level page table (a.k.a. the page directory) is then shared between all applications, but write access to it is exclusive to the seL4 microkernel. New applications (address spaces) just allocate and fill second-level (and, if 4KiB pages are needed, third-level) page tables without worrying about the first level. During a context switch, the address-space change is done by writing the address of the second-level page table (allocated by the application) into the first entry of the page directory rather than updating the sptbr register. This solution has the limitation of restricting application address-space mappings to the first GiB of the virtual address range, but it has the benefit of optimizing both memory and time. Memory is saved because there is no need to allocate three levels of page tables when creating a new task, only two (or even one), as with SV32. From a timing perspective, seL4 used to copy the kernel mapping for each created task; this is no longer needed since the kernel mapping lies in a separate second entry of the page directory, covering the 256MiB within the second GiB of virtual memory. The SV39 mapping of seL4 is shown in the next figure.

seL4 mapping on RV64/SV39
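A simplified sketch of that address-space switch; all identifiers here are illustrative rather than the port's actual names, and pte_for_table()/sfence_vm() are hypothetical helpers:

#include <stdint.h>

/* Hypothetical helpers: build a non-leaf PTE pointing at a next-level
 * page table, and flush stale TLB entries. */
extern uint64_t pte_for_table(uintptr_t table_paddr);
extern void     sfence_vm(void);

/* Shared SV39 first-level table (page directory): entry 0 covers the first
 * GiB (applications), entry 1 covers the second GiB (kernel mappings). */
static uint64_t page_directory[512] __attribute__((aligned(4096)));

/* Switching address spaces: instead of changing sptbr, rewrite the first
 * entry so it points at the new task's second-level page table. */
static void switch_address_space(uintptr_t new_task_lvl2_paddr)
{
    page_directory[0] = pte_for_table(new_task_lvl2_paddr);
    sfence_vm();   /* invalidate stale translations for the first GiB */
}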



Next, I will be working on cleaning up the code, writing tutorials on how to build and run seL4 on different RISC-V platforms, fixing bugs and adding more features.

References

Monday, June 8, 2015

seL4 on RISC-V is running SOS (Simple Operating System)

SOS running on seL4 on RISC-V

Abstract 


Following the first status update of my project (Porting seL4 to RISC-V), this post reveals more updates. Most notably, the port is now mature enough to run SOS (Simple Operating System), which is recommended by the seL4 getting-started guide [1] [2] for learning about seL4 programming. Given that SOS is used as part of an advanced operating systems course offered by UNSW and currently only runs on the Sabre Lite ARM-based board, seL4 on RISC-V can be considered the second supported platform for SOS, and the first all-open-source seL4 system, given that seL4 (and its components), the RISC-V ISA and the Spike simulator (and hopefully soon Rocket Chip/lowRISC on FPGA) are all open-source.

Details 


Running SOS on the seL4/RISC-V platform shows that the port is making steady progress and that most of the seL4/RISC-V API and internals work fine. As mentioned before, a seL4 microkernel port by itself wouldn't be interesting without some use cases. So, rather than unit-testing each seL4 function, I preferred to port an existing (and interesting) use case like SOS, which depends entirely on the seL4 API and heavily exercises many of its features. This gives me a better understanding of what a real-world seL4-based application needs and how it behaves, and how the seL4 microkernel implementation should react, allowing debugging from the very highest level of a seL4 system (applications running on SOS, which in turn runs on seL4) down to the very lowest level of the RISC-V hardware implementation. So what is SOS, and what was needed to run it on the seL4/RISC-V port?

What's SOS


"simple operating system (SOS) is a server running on top of the seL4 microkernel. The SOS server is expected to provide a specified system call interface to its clients (Specified in libs/libsos/include/sos.h)." [2] The SOS framework is described in the following figure.

SOS framework


The components shown in the picture above are:
  • Hardware: the hardware in our case is the RISC-V platform. Currently only the Spike simulator is supported.
  • seL4 microkernel: this is the RISC-V port of the kernel (the third port after IA-32 and ARM). It provides the functionality needed to run our SOS project in the form of memory management, scheduling, IPC, etc.
  • SOS: a stub operating system running on top of the seL4 microkernel. It's intended to be developed and enhanced by students and/or people who are interested in learning about seL4. SOS initializes a synchronous endpoint capability for its clients/applications to use for communication. Interrupts are delivered using an asynchronous endpoint (seL4 has two types of endpoint capability: synchronous and asynchronous).
  • tty_test: a simple application running on top of SOS that simply prints out a hello world message.
Applications issue system calls to SOS using the seL4 endpoint capabilities. An example of such a system call is the tty_test application requesting (from SOS) that some data be printed out. SOS, on the other hand, waits for system call requests from its clients (using the seL4_Wait system call), serves them, and sends replies. Refer to [3] for more details about the SOS framework.
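For illustration, an SOS-style system-call loop looks roughly like the sketch below. The syscall number and handler names are placeholders, and seL4_Wait() is the old-API name for the blocking receive on a synchronous endpoint (later renamed seL4_Recv):

#include <sel4/sel4.h>

/* Illustrative sketch of an SOS-style syscall loop, not the actual SOS code. */
#define SOS_SYSCALL_PRINT 1                         /* placeholder syscall number */
extern seL4_Word handle_print(seL4_Word client_badge, seL4_Word nbytes);

static void syscall_loop(seL4_CPtr endpoint)
{
    for (;;) {
        seL4_Word badge;
        seL4_MessageInfo_t req = seL4_Wait(endpoint, &badge); /* block for a request */
        (void)req;

        switch (seL4_GetMR(0)) {                    /* syscall number in message register 0 */
        case SOS_SYSCALL_PRINT: {
            seL4_Word sent = handle_print(badge, seL4_GetMR(1));
            seL4_SetMR(0, sent);
            seL4_Reply(seL4_MessageInfo_new(0, 0, 0, 1)); /* wake the blocked seL4_Call() */
            break;
        }
        default:
            seL4_SetMR(0, (seL4_Word)-1);           /* unknown request: return an error */
            seL4_Reply(seL4_MessageInfo_new(0, 0, 0, 1));
            break;
        }
    }
}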

What's needed to support running SOS


Besides the seL4 microkernel internals, almost all of the current seL4 user-level libraries had to be supported to build SOS and its applications. The following picture is captured from the top-level Kconfig file of the project.

Project Kconfig file

To be able to build/run SOS, the following components are involved:
  • seL4 microkernel: it now supports memory-management capabilities, context switching and traps from user applications, and many more architecture-dependent functions (than what has been discussed here) were implemented.
  • libseL4: the user-level library for applications to deal with the seL4 microkernel via system calls. It defines the system call formats, kernel object definitions and the user-level context, and exposes them all to the user.
  • libmuslc: the C library that seL4 and its libraries depend on. It has been ported to RISC-V as part of this project, and it now works as expected.
  • libsel4muslcsys: a minimal muslc system-call layer that lets the root task bootstrap. It provides stdio-related system call handlers and is part of the root task's bootstrap procedure, defining the system call table and entry point for muslc-based applications.
  • libplatsupport: platform-related functions (BSP) for seL4-supported platforms. For example, serial driver initialization and console driver functions for a given board are provided there. libsel4platsupport depends on it. I had to add a Spike platform with a very basic implementation just to satisfy build dependencies.
  • libsel4platsupport: for RISC-V it had to be ported to provide the bootstrapping and the executable entry point __sel4_start for the root task. It gets the boot frame address from the seL4 microkernel, constructs the stack vector as muslc expects, and then jumps to the normal muslc _start entry, enabling it to populate the libc environment's data structures, initialize TLS, files and stdio handlers, etc. Finally, the muslc bootstrap procedure jumps to the user's main() function of the root task, which in our use case is SOS.
  • libcpio: used by SOS to parse the cpio archive, searching for user binaries.
  • libelf: used by SOS to parse the ELF binaries extracted from the cpio archive, so that SOS can read the ELF section headers and do the loading/mapping accordingly.
  • libsel4cspace: a library provided to abstract away the details of seL4 CSpace management; it also had to be ported for RISC-V. It's used by SOS to construct tasks' CSpaces.
  • mapping: SOS comes with a mapping.c file that's needed in conjunction with elf.c to load/map the user ELF binaries. It's ported to RISC-V and invokes the newly provided RISC-V invocations such as seL4_RISCV_Page_Map and seL4_RISCV_PageTable_Map.
Other libraries had to be made aware of the new RISC-V architecture (CONFIG_ARCH_RISCV) and modified just enough to build, again to satisfy the dependencies of other required libraries.

seL4/SOS bootstrap procedure

What's next 

Next, I'll be working on 64-bit support, IRQ handling and the seL4test project, and seeing how we can take the port to the FPGA level. The project repos are listed below [4] [5].

References


[1] Getting Started with seL4
[2] Advanced Operating Systems Course | Project: A Simple Operating System
[3] SOS framework
[4] [GitHub] seL4 RISC-V port
[5] [Github] seL4 project containing SOS

Sunday, May 24, 2015

Porting seL4 to RISC-V | Status Report No.1

Introduction 


This year I am participating in GSoC with a new umbrella organization called lowRISC, which aims to produce a completely open-source SoC (System-on-Chip). lowRISC is based on the new open RISC-V ISA, designed at UC Berkeley. I'll be performing a complete RISC-V port of the new formally verified microkernel seL4.

Details

The project basically involves working with seL4 and RISC-V. The next sections introduce some related details about each.

RISC-V

RISC-V is an open ISA designed to help with computer architecture education and research. It supports both 32-bit and 64-bit modes and is powerful enough to run Linux. The UC Berkeley team has implemented this ISA (the 64-bit RISC-V Rocket core) and showed some interesting comparisons between it and the ARM Cortex-A5.

64-bit RISC-V Rocket Chip and ARM Cortex-A5 comparison [1]

QEMU and Spike are the main simulators that enable running/debugging RISC-V software. 

Recently, a new draft of the RISC-V privileged ISA specification was released [2], which describes rich new features. Notably, the draft introduces four new modes that RISC-V can run in (before that, there were only supervisor and user modes). The four modes are user, supervisor, hypervisor (not implemented yet, however) and machine mode. This new design takes the RISC-V architecture to a new level where virtualization can be supported and easily researched.

seL4

seL4 is a new open-source L4 microkernel developed by NICTA and now owned by General Dynamics C4 Systems. It gained its popularity from the announcement that "The world's first operating-system kernel with an end-to-end proof of implementation correctness and security enforcement is now open source." The seL4 developers believe it is currently the state-of-the-art microkernel. The following figure shows the history of microkernels in general and their implementations.

L4 microkernels history [3]


The L4 simplicity concept has been largely achieved in seL4, given that it has about 10K lines of C code, compared to Fiasco.OC, which has 36K lines of C/C++ code.

Currently seL4 is ported to only two architectures: ARM and IA-32. Only the ARM port is formally verified, and both support only 32-bit implementations; 64-bit support is still a work in progress. Unlike the ARM port, the IA-32 port supports booting in multi-kernel mode.

seL4 microkernel itself wouldn't be of much interest without user-land applications and libraries. There are many libraries and projects that build on seL4 microkernel:
  • verification, the seL4 proofs.
  • seL4test, a test suite for seL4, including a Library OS layer.
  • CAmkES, a component architecture for embedded systems based on seL4. See the CAmkES pages for more documentation about CAmkES.
  • VMM, a componentised virtual machine monitor for IA-32 platforms using Intel VT-x and VT-d extensions.
  • RefOS, a reference example of how one might build a multi-server operating system on top of seL4. It was built as a student project.

seL4 on RISC-V

Porting seL4 to RISC-V requires knowledge of both the seL4 microkernel and the RISC-V design/implementation. The project aims to perform a complete port of the seL4 microkernel that enables some of the previously mentioned projects to run on it, mainly the seL4test project, which has over 120 tests exercising the seL4 microkernel API, features and behaviour. There have been some implementation trade-offs in the project, described below.

32-bit or 64-bit?

Both! As already mentioned, seL4 currently only supports 32-bit. RISC-V, on the other hand, has focused on 64-bit implementations right from the start with little support for 32-bit; the UC Berkeley team only has the 64-bit Rocket Chip, and there is no 32-bit hardware implementation so far (except for some simple educational repos). Luckily, Spike recently gained 32-bit support (with a new --isa flag). It is easier to port seL4 to a 32-bit architecture by following/imitating the ARM/IA-32 ports; a 64-bit implementation would be more challenging, as most of the seL4 data structures and scripts assume a 32-bit environment. As there's no 32-bit hardware implementation of RISC-V yet, the port wouldn't have the chance to run on real hardware. Hence, we decided to start with the 32-bit port, make it run on Spike first, and from there evolve to 64-bit that can run on Spike or the Rocket Chip--hopefully both!

Rocket Chip vs. Spike and/or QEMU

Again, this is closely related to the previous trade-off of 32-/64-bit implementations. If we end up with a 64-bit seL4 working on Spike, it can easily run on the 64-bit Rocket Chip. The Rocket Chip and Spike are up to date with the latest privileged ISA specification; QEMU, however, isn't. Consequently, we chose Spike as the main simulator. Spike is also closer to the hardware implementation in that it simulates the HTIF interface and can communicate with the riscv-fesvr (front-end server) shared library, like the Rocket Chip.
 

Working in which mode?

The latest privileged specification introduces four modes that RISC-V software can run in. Conceptually, seL4 might run in any of the three privileged modes separately, or even in two or three of them simultaneously. The next figure shows the possible configurations of seL4, guest OSes and applications with regard to which modes they can run in.

seL4 in which RISC-V mode trade-off

   
The choice of which mode to run the seL4 microkernel in was narrowed down to two by the fact that there is no hypervisor implementation yet. These two modes are machine mode (M-mode) and supervisor mode (S-mode). M-mode supports only physical access control and base-and-bounds checking, i.e., no mapping or address translation (SV32, SV39 and SV48); only S-mode supports those. The seL4 microkernel, on the other hand, expects to run in an address-translation-based mode, and maps its kernel image, IPC buffers, boot frame and other areas of memory during bootstrap. So, for now, we followed the current seL4 ports and work in S-mode.

Loading the image(s) and mapping pages

The bare seL4 system basically consists of 1) the kernel image and 2) applications. The current ARM and IA-32 seL4 ports differ in the way they load the kernel image and applications. Since the IA-32 port can boot in multi-kernel mode, it loads the images in a way similar to GRUB: the kernel is the first part that takes control of the physical resources, and it loads/maps the application images itself. The ARM port behaves differently in that it archives the kernel and application images in cpio format. A separate elfloader tool reads the ELF images from the cpio archive, loads them into the available physically contiguous memory, sets up the VM environment and finally maps the ELF images according to their sections' VMAs. Hence, the elfloader is the first to take control of the physical resources, and it then passes control to the kernel (which works in a VM environment right from the start) along with some information about the load addresses of the kernel image itself and the user image(s). The final image for the seL4/ARM system then contains: 1) the elfloader tool, 2) libelf, 3) libcpio, 4) the kernel image and 5) the user applications. We followed the ARM port as it's more hardware-agnostic, and as a start the RISC-V port wouldn't need to support multi-kernel mode.

What has been done so far

So far, the basic microkernel port can bootstrap and jump to the user image on Spike in 32-bit mode, working only in S-mode.

seL4 microkernel running on Spike


To be able to achieve this, I had to work on the following seL4 components.
  • libmuslc: libelf depends on libmuslc. I performed a very basic port of the musl C library to the RISC-V architecture, enough to build it successfully and produce the .a library.
  • libelf: this one is portable and architecture-independent. It has to be included as part of the ELF loading process.
  • libcpio: like libelf, libcpio is also architecture-independent and is used to read the cpio archive containing the kernel image and user images.
  • elfloader: this tool was developed by the seL4 team for the ARM port, and I had to port it to RISC-V. It has to work in M-mode and acts like riscv-pk, that is, any system calls from the seL4 microkernel are redirected to the elfloader code, which handles them and returns (apart from its main purpose, which is loading the kernel/user images). elfloader currently only supports the write and exit system calls (to be able to get some printf output and exit the Spike simulator).
  • seL4 microkernel: the project is mainly about the seL4 microkernel. The port basically followed the ARM port, and a lot of code is even copied from it. The seL4 microkernel runs in S-mode right from the start, as mentioned previously. Some architecture-level capability data structures had to be modified according to the RISC-V ISA, and the low-level RISC-V VM handling code is now implemented to map the kernel image, kernel frames, initial task and user images properly.
  • Build system: the build system for seL4 projects is the Linux Kconfig/Kbuild build system. The existing Kconfig/Kbuild files had to be modified to add a new entry for the RISC-V architecture with a new Spike platform (which runs on the Spike simulator). New riscv_defconfig, project-riscv.mk, makefiles and other files were added to enable building a complete seL4/RISC-V system (elfloader, libcpio, libelf, seL4 microkernel, user image) like in the seL4test project and other seL4 projects.

Next, I'll be working on the seL4 system call API, timer and IRQ support, and 64-bit mode. You can follow my blog for more updates about the project, as well as my GitHub repo(s) [4] [5] [6].

References


[3] Elphinstone, Kevin, and Gernot Heiser. "From L3 to seL4 what have we learnt in 20 years of L4 microkernels?." Proceedings of the Twenty-Fourth ACM Symposium on Operating Systems Principles. ACM, 2013.

Sunday, February 22, 2015

[HOWTO] 5- Run RTEMS on QEMU

If you're a QEMU fan and want to give it a try running RTEMS, then this post is for you. Make sure you've followed all of the previous instructions described here [1] [2] [3].

There are some RTEMS sim-scripts you can use that do the magic of running simulators for you (but then you'd have to get the sim-scripts repo and run them from there). I won't go through sim-scripts here; we'll just do it manually, as it's just one command! For more instructions you can peek into the or1ksim BSP README file.

$ vim $HOME/development/rtems/src/rtems/c/src/lib/libbsp/or1k/or1ksim/README
or1ksim BSP README


Run

Let's now run an interesting sample app called capture that you can have some interactive fun with.
$ qemu-system-or32 -serial mon:stdio -serial /dev/null -net none -nographic -m 128M -kernel $HOME/development/rtems/build/or1k-rtems4.11/c/or1ksim/testsuites/samples/capture/capture.exe

RTEMS or1ksim/capture.exe running on QEMU

You can try running other samples under the samples directory.

References


[HOWTO] 4- Run RTEMS on or1ksim

At this point, you must be eager to see your effort building the toolchain, the simulator and RTEMS [1] [2] [3] come into action; I mean, you can now literally see RTEMS executing!


The or1ksim simulator needs a script file describing the system architecture, peripherals, CPU version and configuration. The RTEMS or1ksim BSP ships with such a file, which you can find in its source directory.

$ vim $HOME/development/rtems/src/rtems/c/src/lib/libbsp/or1k/or1ksim/sim.cfg
RTEMS sim.cfg
What concerns us about this file is that it sets the UART baud rate, how the output should appear, and some debugging options (in case you want to debug).

Run

Now you can run the RTEMS hello.exe sample (produced by your previous RTEMS build) by typing this command:

$  or1k-elf-sim -f $HOME/development/rtems/src/rtems/c/src/lib/libbsp/or1k/or1ksim/sim.cfg $HOME/development/rtems/build/or1k-rtems4.11/c/or1ksim/testsuites/samples/hello/hello.exe

RTEMS hello.exe running on or1ksim
Congratulations! You made it!

Debug

If you want to debug using GDB, you can edit the sim.cfg file to enable debugging: just open it and set the enabled option in the debug section to 1.
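For reference, the debug section then looks roughly like this (exact option names may differ between or1ksim versions; rsp_port is the GDB remote-protocol port used below):

section debug
  enabled     = 1
  rsp_enabled = 1
  rsp_port    = 50001
end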


As you may have already guessed, you then have to start the or1ksim server, which will listen for GDB clients on port 50001. A typical session is sketched below.

GDB/or1ksim Debugging
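The session looks like the following; the GDB executable name (or1k-rtems4.11-gdb) is assumed from the RSB-built toolchain prefix, and Init is the entry task of the RTEMS samples:

# Terminal 1: start or1ksim with debugging enabled; it waits for a GDB client
$ or1k-elf-sim -f sim.cfg hello.exe

# Terminal 2: connect GDB to the simulator
$ or1k-rtems4.11-gdb hello.exe
(gdb) target remote :50001
(gdb) break Init
(gdb) continue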

References

 

[HOWTO] 3- Build RTEMS for OpenRISC

If you have not followed the previous tutorial posts describing how to set up your development environment, go ahead and do it [1] [2]; I'll be waiting for you to come back and start building RTEMS for OpenRISC as described here. Currently there is only one RTEMS BSP you can build, and it runs on both or1ksim and QEMU. Now let's begin.

1- Checking out RTEMS source

$ cd $HOME/development/rtems/src
$ git clone git://git.rtems.org/rtems.git
Cloning into 'rtems'...
remote: Counting objects: 465756, done.
remote: Compressing objects: 100% (81067/81067), done.
remote: Total 465756 (delta 376768), reused 462957 (delta 374743)
Receiving objects: 100% (465756/465756), 63.69 MiB | 83.00 KiB/s, done.
Resolving deltas: 100% (376768/376768), done.
Checking connectivity... done.
2- Bootstrap

The make build system requires that you run the bootstrap command to generate the preinstall.am and Makefile.in files the first time you download and build RTEMS.

$ cd rtems
$ ./bootstrap -p
$ ./bootstrap

3- Configure and build

Create a new build directory and configure/build RTEMS for the or1ksim BSP.

$ cd ../../
$ mkdir build
$ cd build/

$ ../src/rtems/configure --target=or1k-rtems4.11 --enable-rtemsbsp=or1ksim
$ make

This will generate the default sample executables like hello world and ticker, which you can run on or1ksim and QEMU as described in the following tutorial posts.

References

[1] [HOWTO] 1- Build or1k-rtems* toolchain via RSB
[2] [HOWTO] 2- Build or1k simulator(s)

Saturday, February 21, 2015

[HOWTO] 2- Build or1k simulator(s)

So, assuming you're coming from the previous post ([HOWTO] 1- Build or1k-rtems* toolchain via RSB) and have already installed the or1k toolchain for RTEMS, you're ready to build RTEMS. But before that, you need an OpenRISC simulator to run RTEMS on; this post illustrates how to get some simulator(s) built.

or1ksim

or1ksim is the main or1k simulator, and the one that can run Linux and RTEMS. For more details about or1ksim, refer to its web page [1]. Now, assuming you have RSB installed from the previous post [2], you can install the latest or1ksim development code from GitHub by typing just one RSB command (RSB FTW)!

1- Build
$ cd $HOME/development/rtems/src/rtems-source-builder/bare/config
$ ../../source-builder/sb-set-builder --log=l-or1ksim.txt --prefix=$HOME/development/rtems/4.11 devel/or1ksim
RTEMS Source Builder - Set Builder, v0.5.0
Build Set: devel/or1ksim
config: devel/or1ksim-1.1.0.cfg
package: or1ksim-1.1.0-x86_64-linux-gnu-1
Creating source directory: sources
download: https://github.com/openrisc/or1ksim/archive/or1k-master.zip -> sources/or1k-master.zip
 redirect: https://codeload.github.com/openrisc/or1ksim/zip/or1k-master
downloading: sources/or1k-master.zip - 2.1MB     
warning: or1k-master.zip: no hash found
building: or1ksim-1.1.0-x86_64-linux-gnu-1
installing: or1ksim-1.1.0-x86_64-linux-gnu-1 -> /home/hesham/development/rtems/4.11
cleaning: or1ksim-1.1.0-x86_64-linux-gnu-1
Build Set: Time 0:00:42.205439

2- Check



Now you want to be sure that this "one command" fetch, build and install really works!

$ ls -alh $HOME/development/rtems/4.11/bin/ | grep sim*
-rwxr-xr-x 1 hesham disk 1.1M Feb 21 16:25 or1k-elf-sim 

QEMU

RTEMS can also run on QEMU. If you do not already have it, you can simply "RSB" it. It'll do a full QEMU build for all the supported architectures (that's why it'll take a lot of time).

1- Build


$ cd $HOME/development/rtems/src/rtems-source-builder/bare/config
$ ../../source-builder/sb-set-builder --log=l-qemu.txt --prefix=$HOME/development/rtems/4.11 devel/qemu
2- Check




Now that you have more tools than you need, you can proceed to the next posts describing how to build and run RTEMS on one of the previously mentioned simulators.

References


[1] http://opencores.org/or1k/Or1ksim

[HOWTO] 1- Build or1k-rtems* toolchain via RSB

This is the first post of the HOWTO tutorial series, illustrating how to set up your development environment and run RTEMS on OpenRISC/or1ksim.

Before beginning, some environment variables have to be set up, assuming that you're using Linux. Most of the instructions here are quoted from the RSB page [1].

1- Setup

This is where your executables go.

$ export PATH=$HOME/development/rtems/4.11/bin:$PATH

2- Create directory for RSB source and clone it

$ cd
$ mkdir -p development/rtems/src
$ cd development/rtems/src
$ git clone git://git.rtems.org/rtems-source-builder.git
$ cd rtems-source-builder

3- Build toolchain for or1k-rtems
$ cd rtems
$../source-builder/sb-set-builder --log=l-or1k.txt --prefix=$HOME/development/rtems/4.11 4.11/rtems-or1k
If the previous command failed, check whether all the necessary packages are installed and, if not, install them yourself; for more details refer to [1].

The previous command will fetch all the toolchain sources from upstream, then build and install them, so it'll take some time. On my Intel i7 Fedora x86_64 system it took about half an hour.

Build Set: Time 0:36:53.247605

4- Checking

After the installation finishes, you can check the prefix directory and you should see that the executables have already been installed there. This is what I got when I typed the following command:


Congratulations! You're now ready to build RTEMS for OpenRISC!

References


[1] https://ftp.rtems.org/pub/rtems/people/chrisj/source-builder/source-builder.html

Thursday, February 19, 2015

Thoughts on Supporting Rump Kernels on RTEMS

Introduction 

Once I heard about Rump kernels [1] from Gedare Bloom (one of the RTEMS maintainers), I started to do some research about them, and about whether RTEMS could support such a new architecture. A Rump kernel is a way to run unmodified NetBSD kernel drivers virtually anywhere. So, on a platform that can support Rump kernels, a developer can just pick up some NetBSD drivers (that have been tested and proven to work properly) and compile and link them without any modifications to the driver source code or the host kernel itself. Moreover, these drivers can even be upgraded from NetBSD upstream without any significant effort. So, what exactly are the Rump kernel, the Anykernel and the so-called platform?

A Rump kernel is not a fully featured OS, nor a complete virtual machine like KVM or VirtualBox. It's a minimal, thin implementation that enables the host platform (see the platform section) to emulate the system call layer that NetBSD drivers expect/call. A Rump kernel is hardware-agnostic, meaning that it does not depend on specific hardware features like virtualization or cache coherence. For example, kernel drivers need some way of allocating memory (using rumpuser_malloc); it doesn't really matter whether this is virtual/logical memory (address space allocated via page tables) or fixed physical addresses. That depends on the platform; what concerns Rump kernels is being able to freely allocate, use and free this area of memory. That is, Rump kernels try to make use of the underlying platform's features as much as possible, while giving the illusion (with working workarounds, of course) to the drivers that they get what they need! At this point you may be wondering about the structure of Rump kernels and how they depend on/relate to the platform. The following figure [2] may make it clearer. Please note that libc and the layers above it are optional.

Figure 1: Rump Kernel Environment

As you can see, Rump kernel support is stacked. At the top of the stack comes the application, which can be POSIX-compliant. The next sections illustrate some of these stack components and how RTEMS (as an example platform) and Rump kernels could work together.

Platform


So what's the platform? Basically, the platform can be almost anything: raw hardware, virtual machines or an OS like Linux. There are currently implementations for several such platforms. Rump kernels can run in a POSIX userspace on "Linux, Android, NetBSD, FreeBSD, OpenBSD, Dragonfly BSD, Solaris (+ derivates) and Windows (via Cygwin)" [3]. There are also implementations that run on bare metal, in virtual machines such as KVM or VirtualBox, or on hypervisors like Xen. Genode OS has been modified to support Rump kernels [4], and similarly Minix. So, can RTEMS be the next platform? The simple answer is yes!

RTEMS is a POSIX-compliant RTOS, so with a small effort, a Rump kernel could run on top of this RTEMS/POSIX environment. However, it would make more sense from the performance, control and code-density perspectives to discard this POSIX dependency and write the whole hypercall layer natively (see figure 1). Userspace POSIX platforms [5] have another POSIX library (the userspace libraries in the previous figure) as well as the host POSIX library. As the authors of Rump kernels say, it's enough for a platform to just implement the hypercall layer to support the whole Rump kernel stack. So, theoretically, if RTEMS implemented this very thin, ~1000-lines-of-code hypercall layer, all the other NetBSD code could be imported, providing NetBSD drivers, libc, and even an unmodified POSIX library.


The hypercall (AKA rumpuser) layer [7] is divided into basic and IO operations. The complete interface can be found here [6]. Almost all of the functions mentioned in that link can be implemented by using/wrapping existing RTEMS features. Some of the interfaces are mentioned below.

Memory Allocation

int rumpuser_malloc(size_t len, int alignment, void **memp)
void rumpuser_free(void *mem, size_t len)
These functions can easily be mapped to the RTEMS libcsupport implementation of malloc/free. Other memory managers like the Partition and Region managers could also be used. A minimal sketch is shown below.
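A minimal sketch, assuming the hypercalls return 0 on success or an errno value as the rumpuser interface expects:

#include <errno.h>
#include <stdlib.h>

/* Sketch of the memory hypercalls on top of RTEMS' libcsupport, which
 * provides the standard malloc/posix_memalign/free. */
int rumpuser_malloc(size_t len, int alignment, void **memp)
{
    /* posix_memalign() needs a power-of-two alignment of at least
     * sizeof(void *); round small requests up. */
    if (alignment < (int)sizeof(void *))
        alignment = sizeof(void *);

    return posix_memalign(memp, (size_t)alignment, len); /* 0 or an errno value */
}

void rumpuser_free(void *mem, size_t len)
{
    (void)len;   /* the RTEMS allocator does not need the length */
    free(mem);
}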

Files and IO

int rumpuser_open(const char *name, int mode, int *fdp)
int rumpuser_close(int fd)
int rumpuser_getfileinfo(const char *name, uint64_t *size, int *type)
void rumpuser_bio(int fd, int op, void *data, size_t dlen, int64_t off,
     rump_biodone_fn biodone, void *donearg)
int rumpuser_iovread(int fd, struct rumpuser_iovec *ruiov, size_t iovlen,
     int64_t off, size_t *retv)
int rumpuser_iovwrite(int fd, struct rumpuser_iovec *ruiov,
     size_t iovlen, int64_t off, size_t *retv)
int rumpuser_syncfd(int fd, int flags, uint64_t start, uint64_t len) 
The previous IO functions can be implemented by wrapping IMFS and some stubs; most embedded applications do not need a fully featured file system, but if one is needed, the option of wrapping the appropriate RTEMS filesystem is still there. How to configure and enable Rump kernel features is an implementation trade-off, but currently rump-posix does it by starting a Rump kernel server with command-line flags saying which features are needed from the Rump kernel. For example, this command line does such a job (loading a filesystem driver when starting the server):
rumpremote (unix:///tmp/rumpctrlsock)$ ./rumpdyn/bin/rump_server -lrumpvfs unix:///tmp/rumpctrlsock
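Back at the hypercall level, a minimal sketch of the rumpuser_open/close/getfileinfo calls above, built on the POSIX calls RTEMS already provides; translate_open_mode() and translate_file_type() are hypothetical helpers for converting between the rump kernel's encodings and POSIX O_* flags / st_mode bits:

#include <errno.h>
#include <fcntl.h>
#include <stdint.h>
#include <sys/stat.h>
#include <unistd.h>

/* Hypothetical conversion helpers (not part of RTEMS or rumpuser). */
extern int translate_open_mode(int rump_mode);
extern int translate_file_type(mode_t st_mode);

int rumpuser_open(const char *name, int mode, int *fdp)
{
    int fd = open(name, translate_open_mode(mode), 0644);
    if (fd < 0)
        return errno;
    *fdp = fd;
    return 0;
}

int rumpuser_close(int fd)
{
    return (close(fd) < 0) ? errno : 0;
}

int rumpuser_getfileinfo(const char *name, uint64_t *size, int *type)
{
    struct stat st;
    if (stat(name, &st) < 0)
        return errno;
    *size = (uint64_t)st.st_size;
    *type = translate_file_type(st.st_mode);
    return 0;
}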

Clocks

"The hypervisor should support two clocks, one for wall time and one for monotonically increasing time, the latter of which may be based on some arbitrary time (e.g. system boot time). If this is not possible, the hypervisor must make a reasonable effort to retain semantics." [6]
All of the required clock services are provided by RTEMS, such as _Watchdog_Ticks_since_boot. RTEMS provides plenty of time-management facilities, like the Watchdog, Time and Clock managers, the Timer benchmark (which may or may not depend on the Clock manager) and the CPU Counter (deprecated?). Hence, there is more than enough to support the Clocks part of the hypercall interface.
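A hedged sketch of how RTEMS could back the two required clocks via POSIX clock_gettime(); the rumpuser_clock_gettime() prototype and the RUMPUSER_CLOCK_* names are assumed from the rumpuser(3) manual page rather than taken from this post:

#include <errno.h>
#include <stdint.h>
#include <time.h>

/* Assumed clock selectors; redeclared here only to keep the sketch self-contained. */
enum { RUMPUSER_CLOCK_RELWALL, RUMPUSER_CLOCK_ABSMONO };

int rumpuser_clock_gettime(int which, int64_t *sec, long *nsec)
{
    struct timespec ts;
    clockid_t id = (which == RUMPUSER_CLOCK_ABSMONO) ? CLOCK_MONOTONIC
                                                     : CLOCK_REALTIME;
    if (clock_gettime(id, &ts) < 0)
        return errno;
    *sec  = (int64_t)ts.tv_sec;
    *nsec = ts.tv_nsec;
    return 0;
}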

Console output 

"Console output is divided into two routines: a per-character one and printf-like one. The former is used e.g. by the rump kernel's internal printf routine. The latter can be used for direct debug prints e.g. very early on in the rump kernel's bootstrap or when using the in-kernel rou- tine causes too much skew in the debug print results (the hypercall runs outside of the rump kernel and therefore does not cause any locking or scheduling events inside the rump kernel)." [6]
 Both are there!


Threads

int rumpuser_thread_create(void *(*fun)(void *), void *arg,
const char *thrname, int mustjoin, int priority, int cpuidx,
void **cookie)
void rumpuser_thread_exit(void)
int rumpuser_thread_join(void *cookie)
void rumpuser_curlwpop(int enum_rumplwpop, struct lwp *l)

Mainly, all thread management is directly mapped to the host threading implementation. So, when a Rump kernel driver creates a thread, the host actually creates this thread and schedules it according to its own policy. It does not matter how the host implements threading. For RTEMS, all of these functions can easily be mapped to corresponding ones; a sketch on top of POSIX threads is shown below.
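A sketch of the thread hypercalls quoted above, built on RTEMS' POSIX threads; the priority and CPU-index hints are simply ignored here, although RTEMS could honour them through pthread attributes:

#include <errno.h>
#include <pthread.h>
#include <stdlib.h>

int rumpuser_thread_create(void *(*fun)(void *), void *arg,
                           const char *thrname, int mustjoin,
                           int priority, int cpuidx, void **cookie)
{
    (void)thrname; (void)priority; (void)cpuidx;   /* hints ignored in this sketch */

    pthread_t *pt = malloc(sizeof(*pt));
    if (pt == NULL)
        return ENOMEM;

    int rc = pthread_create(pt, NULL, fun, arg);
    if (rc != 0) {
        free(pt);
        return rc;
    }

    if (mustjoin) {
        *cookie = pt;              /* handed back later via rumpuser_thread_join() */
    } else {
        pthread_detach(*pt);
        free(pt);
    }
    return 0;
}

void rumpuser_thread_exit(void)
{
    pthread_exit(NULL);
}

int rumpuser_thread_join(void *cookie)
{
    pthread_t *pt = cookie;
    int rc = pthread_join(*pt, NULL);
    free(pt);
    return rc;
}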

Synchronization and Mutexes

Normal mutex operations are provided by RTEMS. There is also a need for read/write locks and condition variables, which RTEMS' POSIX API (pthread_rwlock and pthread_cond) also provides.

What then?

If we had this hypercall layer on RTEMS, no other effort would be needed: the rest of the (big) NetBSD code can be linked AS IS! At that stage we could try out some NetBSD drivers. We could also have some fun with the Rump kernel remote/client mode, which separates the kernel from the clients (applications). For example, we could have a bare-metal client connecting to Rump kernels on RTEMS, or vice versa, communicating using IPC and TCP/IP. Such a platform can be set up using simulators or real hardware.

Another interesting direction is using Rump kernels to tackle the scalability issues that RTEMS currently faces, providing an alternative to complex fine-grained locking. We could have cores with attached IO devices running Rump kernels and acting as servers for clients on other cores, communicating together (using inter-processor interrupts, message passing, shared-memory communication or whatever).

References


[1] http://rumpkernel.org/
[2] Rump Kernels No OS? No Problem!
[3] Rump Kernels Platforms
[4] Genode OS and Rump Kernels.
[5] Userspace (POSIX) Rump kernel.
[6] rumpuser - NetBSD Manual Pages
[7] https://github.com/rumpkernel/buildrump.sh/issues/59