Discussion:
[RFC PATCH 00/50] ARM support
Jean-Philippe Brucker
2014-08-08 12:02:43 UTC
Permalink
Hello,

This patch series is a proposal for initial 32bit ARM support for
Jailhouse.

I based this port on the Versatile Express platform, allowing it to run
on ARM's system models. Since there are as many different memory maps in
the ARM ecosystem as there are implementations, some discussion will be
needed to add device tree support before adding new platforms.
For the moment, I did not add any major change to the core or the driver.
I also tested it on an Odroid-XU, but I am not comfortable adding it to
this series, since I used the non-mainline hardkernel tree with some
patches of my own to fix virtualisation support.

This series is NOT an official support from ARM ltd., but the result of
my summer placement, which ends this week. I will continue discussing and
working on it on my own time, using my home address.


A few forewords about virtualisation on ARM:

Since ARMv7 (Cortex-A15), ARM provides hardware virtualisation
extensions in the form of an additional Exception Level. This level uses
a stage-2 set of page tables to partition memory between guests and
allows sensitive instructions to be trapped.
It also has its own stage-1 page tables, allowing it to use cacheable
and shareable memory types. In addition, the Generic Interrupt
Controller (GIC) provides a way to virtualise all interrupts.

The kernel runs at Exception Level 1 (EL1) and the hypervisor at EL2.
A trap is taken to the EL2 vectors, and a single syndrome register (HSR)
allows the hypervisor to dispatch the trap and, for example, emulate the
trapped instruction or inject an interrupt.

Implemented features
====================

* Hypervisor enabling and disabling

When it is started at EL2, Linux installs a small vector stub, allowing
a hypervisor to override it and install its own vectors.
This installation is a bit delicate, since the initial jump is done from
the kernel context, which uses an MMU and caches, to a completely bare
environment. Here, the EL2 MMU is immediately configured while running
inside an identity-mapped region, so that the hypervisor installation
can safely continue.

* Cell creation and destruction

The core does most of the work when partitioning the machine, by using
the paging callbacks for the cell's stage-2 pages.
Parking CPUs is done using a small PSCI implementation: the CPUs of
suspended or stopped cells spin in the hypervisor, waiting for a mailbox
to be updated.
When destroying a cell or shutting down Jailhouse, the hypervisor must
use the platform-specific hotplug features.
All platform features are currently detected using the CONFIG options of
the host kernel.
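The mailbox-based parking can be sketched as below. The names and the
invalid-address sentinel are assumptions for illustration, not the
actual Jailhouse interface:

```c
#include <stdint.h>

/* A parked CPU spins until another CPU stores a valid resume address in
 * its mailbox, then returns that address as the new entry point. */
#define PSCI_INVALID_ADDRESS ((uintptr_t)-1)

static uintptr_t psci_wait_mailbox(volatile uintptr_t *mailbox)
{
	while (*mailbox == PSCI_INVALID_ADDRESS)
		; /* the real implementation would execute wfe here */
	return *mailbox;
}
```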

* Virtual interrupts (GICv2 and GICv3)

All physical interrupts are taken to the hypervisor and then directly
injected into the cell, using the List Registers of the GIC's virtual
interface. Software must also maintain a structure of postponed
interrupts, for the cases where all list registers are in use or where
SGIs must be injected into another CPU.

Software-generated interrupts (SGIs) are trapped and moderated by the
hypervisor. GICv2 uses memory-mapped accesses to the distributor, whilst
GICv3 uses system registers. After checking the SGI's targets, they are
stored in the target CPUs' pending structures and injected using a
synchronisation SGI across the cores.

Private Peripheral Interrupts (PPIs) are dedicated to each core and
don't need moderation. A first attempt is made to write them directly to
the list registers; if that fails, they are stored in the per-cpu data.

Shared-Peripheral interrupts (SPIs) are also directly injected, but are
configured in the global GIC distributor to target specific CPUs.
In this port, they are configured from the cell's bitmap: initially
assigned to the root's first CPU, they are re-routed when a cell is
created or destroyed.
All accesses to the distributor are filtered, so that guests can only
configure the SPIs belonging to them.
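The ownership check on distributor writes can be sketched as follows.
The structure layout and names are illustrative, not the actual
Jailhouse configuration format:

```c
#include <stdbool.h>
#include <stdint.h>

/* One bit per SPI; SPIs are IRQs 32 and up, and this sketch covers the
 * 64-SPI limit mentioned in the cover letter (IRQs 32..95). */
struct cell_irq_bitmap {
	uint32_t spi[2];
};

static bool cell_owns_spi(const struct cell_irq_bitmap *map,
			  unsigned int irq)
{
	if (irq < 32 || irq >= 96)
		return false;
	return map->spi[(irq - 32) / 32] & (1u << ((irq - 32) % 32));
}

/* Filter a 32-bit distributor write covering SPIs [base, base+31]:
 * bits for SPIs the cell doesn't own are silently dropped. */
static uint32_t filter_spi_write(const struct cell_irq_bitmap *map,
				 unsigned int base, uint32_t value)
{
	uint32_t allowed = 0;
	unsigned int i;

	for (i = 0; i < 32; i++)
		if (cell_owns_spi(map, base + i))
			allowed |= 1u << i;
	return value & allowed;
}
```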

Missing features
================

* 64bit support, although this series aims to be abstract enough to ease
the 64bit port.
* Thumb2 host: this was not a priority on my TODO list, but should not be
too difficult to add.
* Hosts using PSCI: I did not have access to a boot-monitor with PSCI.
* Clusters: since the setup code currently uses a simple addition of the
MPIDR to deduce the per-cpu data's location and size, clusters are not
supported yet. entry.S will need to fetch the hypervisor's header to
find out the total number of online CPUs and generate those base
addresses, maybe by filling a hashmap.
* Exhaustive reset of the EL1 environment when starting a cell (Perf,
debug features, float...)
* IRQs greater than 64, because of the current bitmap limitation in the
cells configs. More than one irqchip could be used, but it would be
semantically confusing.
* IRQ remapping, although I understood that support may be added in the
core very soon.
* Clean platform handling, see below.

Points that need more discussions
=================================

* Linux on ARM heavily relies on Device Trees to describe the available
devices and their features. The best way to provide clean device
support in Jailhouse would be for the driver to pass the kernel's
device tree in the root cell's configuration.
This would make it possible to find out the GIC and UART addresses, as
well as the platform-dependent hotplug method and mailbox address, if
any.

* The debug functions are quite problematic: the hypervisor is entered at
EL1 and cannot guess which IO mapping is used by the kernel for the
serial console. As a result, there is no reliable way to print the first
few messages that happen before EL2 initialisation.
Currently, a wild guess assumes that this remapping is the same as the
one used for earlyprintk.
One solution would be to retain all the messages printed at EL1 in a
buffer, but this goes against the 'debug' nature of this printk.
Another would be for the driver to communicate, one way or another, the
virtual address the kernel allocated for the UART.


All comments and reviews are welcome.

Thanks,
Jean-Philippe

Jean-Philippe Brucker (50):
arm: build with virtualisation support
arm: hypervisor entry point
arm: provide an interface for accessing system registers
arm: implement some base functions
arm: add SMP barriers and utilities
arm: add IO helpers
arm: spinlock implementation
arm: implement atomic bitops
arm: implement the debug routines for the pl011 UART
arm: hyp vectors installation
arm: implement the paging callbacks
core: add the ability to use arch-specific linker scripts
arm: initialise hypervisor stage 1 MMU
arm: setup stage 2 MMU for the cells
arm: check architecture features
arm: dispatch hypercalls
arm: pass through init_late
arm: GIC initialisation skeleton
arm: GICv3 initialisation
arm: IRQ handling skeleton
arm: store the pending virtual interrupts
arm: GICv3: handle IRQs
arm: read/write the banked registers
arm: skip instructions that fail their condition check
arm: GICv3: filter the guests' SGIs
arm: minimal PSCI implementation
arm: implement the cell creation
arm: GIC: reset the CPU interface before running a new guest
arm: implement the cell destruction
arm: clear the banked and system registers on reset
arm: flush and enable the caches at initialisation
arm: disable caches on cell reset
arm: complete paging invalidations
arm: better error reporting and panic dump
arm: mmio emulation skeleton
arm: attribute virtual IDs to the cell cpus
arm: GIC: filter redistributor accesses
arm: irqchip: add SPI configuration in cell_init and cell_exit
arm: GIC: handle distributor accesses
arm: PSCI emulation
arm: add platform-dependent SMP operations
arm: ignore writes to the ACTLR register
arm: save the linux hyp-stub vectors
arm: irqchip: add hypervisor shutdown
arm: implement hypervisor shutdown
arm: restore kernel on setup failure
arm: GIC: factor some GICv3 functions into gic_common
arm: add support for GICv2
arm: GICv2: handle SPI routing
arm: exit statistics

driver.c | 8 +
hypervisor/Makefile | 5 +
hypervisor/arch/arm/Makefile | 15 +-
hypervisor/arch/arm/caches.S | 88 ++++
hypervisor/arch/arm/control.c | 391 +++++++++++++++++
hypervisor/arch/arm/dbg-write-pl011.c | 24 ++
hypervisor/arch/arm/dbg-write.c | 46 ++
hypervisor/arch/arm/entry.S | 70 +++-
hypervisor/arch/arm/exception.S | 81 ++++
hypervisor/arch/arm/gic-common.c | 440 ++++++++++++++++++++
hypervisor/arch/arm/gic-v2.c | 284 +++++++++++++
hypervisor/arch/arm/gic-v3.c | 412 ++++++++++++++++++
hypervisor/arch/arm/include/asm/bitops.h | 102 ++++-
hypervisor/arch/arm/include/asm/cell.h | 21 +-
hypervisor/arch/arm/include/asm/control.h | 47 +++
hypervisor/arch/arm/include/asm/debug.h | 35 ++
hypervisor/arch/arm/include/asm/gic_common.h | 59 +++
hypervisor/arch/arm/include/asm/gic_v2.h | 121 ++++++
hypervisor/arch/arm/include/asm/gic_v3.h | 267 ++++++++++++
hypervisor/arch/arm/include/asm/head.h | 24 ++
hypervisor/arch/arm/include/asm/io.h | 66 +++
hypervisor/arch/arm/include/asm/irqchip.h | 116 ++++++
.../arch/arm/include/asm/jailhouse_hypercall.h | 5 +-
hypervisor/arch/arm/include/asm/paging.h | 167 +++++++-
hypervisor/arch/arm/include/asm/paging_modes.h | 5 +
hypervisor/arch/arm/include/asm/percpu.h | 60 ++-
hypervisor/arch/arm/include/asm/platform.h | 66 +++
hypervisor/arch/arm/include/asm/processor.h | 167 ++++++++
hypervisor/arch/arm/include/asm/psci.h | 71 ++++
hypervisor/arch/arm/include/asm/sections.lds | 7 +
hypervisor/arch/arm/include/asm/setup.h | 69 +++
hypervisor/arch/arm/include/asm/setup_mmu.h | 78 ++++
hypervisor/arch/arm/include/asm/smp.h | 52 +++
hypervisor/arch/arm/include/asm/spinlock.h | 61 ++-
hypervisor/arch/arm/include/asm/sysregs.h | 170 ++++++++
hypervisor/arch/arm/include/asm/traps.h | 105 +++++
hypervisor/arch/arm/include/asm/uart_pl011.h | 113 +++++
hypervisor/arch/arm/irqchip.c | 331 +++++++++++++++
hypervisor/arch/arm/lib.c | 36 ++
hypervisor/arch/arm/mmio.c | 167 ++++++++
hypervisor/arch/arm/mmu_cell.c | 145 +++++++
hypervisor/arch/arm/mmu_hyp.c | 333 +++++++++++++++
hypervisor/arch/arm/paging.c | 148 +++++++
hypervisor/arch/arm/psci.c | 148 +++++++
hypervisor/arch/arm/psci_low.S | 82 ++++
hypervisor/arch/arm/setup.c | 200 +++++++--
hypervisor/arch/arm/smp-vexpress.c | 73 ++++
hypervisor/arch/arm/smp.c | 84 ++++
hypervisor/arch/arm/traps.c | 334 +++++++++++++++
hypervisor/hypervisor.lds.S | 4 +
50 files changed, 5917 insertions(+), 86 deletions(-)
create mode 100644 hypervisor/arch/arm/caches.S
create mode 100644 hypervisor/arch/arm/control.c
create mode 100644 hypervisor/arch/arm/dbg-write-pl011.c
create mode 100644 hypervisor/arch/arm/dbg-write.c
create mode 100644 hypervisor/arch/arm/exception.S
create mode 100644 hypervisor/arch/arm/gic-common.c
create mode 100644 hypervisor/arch/arm/gic-v2.c
create mode 100644 hypervisor/arch/arm/gic-v3.c
create mode 100644 hypervisor/arch/arm/include/asm/control.h
create mode 100644 hypervisor/arch/arm/include/asm/debug.h
create mode 100644 hypervisor/arch/arm/include/asm/gic_common.h
create mode 100644 hypervisor/arch/arm/include/asm/gic_v2.h
create mode 100644 hypervisor/arch/arm/include/asm/gic_v3.h
create mode 100644 hypervisor/arch/arm/include/asm/head.h
create mode 100644 hypervisor/arch/arm/include/asm/io.h
create mode 100644 hypervisor/arch/arm/include/asm/irqchip.h
create mode 100644 hypervisor/arch/arm/include/asm/platform.h
create mode 100644 hypervisor/arch/arm/include/asm/psci.h
create mode 100644 hypervisor/arch/arm/include/asm/sections.lds
create mode 100644 hypervisor/arch/arm/include/asm/setup.h
create mode 100644 hypervisor/arch/arm/include/asm/setup_mmu.h
create mode 100644 hypervisor/arch/arm/include/asm/smp.h
create mode 100644 hypervisor/arch/arm/include/asm/sysregs.h
create mode 100644 hypervisor/arch/arm/include/asm/traps.h
create mode 100644 hypervisor/arch/arm/include/asm/uart_pl011.h
create mode 100644 hypervisor/arch/arm/irqchip.c
create mode 100644 hypervisor/arch/arm/lib.c
create mode 100644 hypervisor/arch/arm/mmio.c
create mode 100644 hypervisor/arch/arm/mmu_cell.c
create mode 100644 hypervisor/arch/arm/mmu_hyp.c
create mode 100644 hypervisor/arch/arm/paging.c
create mode 100644 hypervisor/arch/arm/psci.c
create mode 100644 hypervisor/arch/arm/psci_low.S
create mode 100644 hypervisor/arch/arm/smp-vexpress.c
create mode 100644 hypervisor/arch/arm/smp.c
create mode 100644 hypervisor/arch/arm/traps.c
--
1.7.9.5
--
You received this message because you are subscribed to the Google Groups "Jailhouse" group.
To unsubscribe from this group and stop receiving emails from it, send an email to jailhouse-dev+***@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
Jean-Philippe Brucker
2014-08-08 12:02:57 UTC
Permalink
This patch adds the necessary MMU setup code for the cells. They use the
same paging functions as the hypervisor, but their flags are slightly
different.

As an improvement, it would be good to use only two levels of page
tables on 32bit instead of three. This would limit the memory accessible
from EL1 to 16GB instead of the current 256GB.
This doesn't really matter for the moment: since the core handles virtual
addresses as unsigned longs, LPAE cannot be used.
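The VTTBR composition performed in arch_mmu_cpu_cell_init below can be
sketched portably like this. The sketch mirrors the patch (VMID taken
from the cell ID in bits [55:48], stage-2 root table address underneath);
the PADDR_OFF = 12 assumption corresponds to a 4KB-page configuration:

```c
#include <stdint.h>

#define VTTBR_VMID_SHIFT 48
/* Equivalent of BIT_MASK(47, PADDR_OFF) with PADDR_OFF = 12 assumed. */
#define TTBR_MASK (((UINT64_C(1) << 48) - 1) & ~((UINT64_C(1) << 12) - 1))

/* Compose VTTBR: 8-bit VMID (the cell ID, checked against 0xff in the
 * patch) in bits [55:48], root table physical address below bit 48. */
static uint64_t make_vttbr(uint32_t cell_id, uint64_t table_paddr)
{
	return ((uint64_t)cell_id << VTTBR_VMID_SHIFT)
	     | (table_paddr & TTBR_MASK);
}
```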

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/Makefile | 2 +-
hypervisor/arch/arm/include/asm/cell.h | 11 ++-
hypervisor/arch/arm/include/asm/control.h | 26 +++++++
hypervisor/arch/arm/include/asm/paging.h | 1 +
hypervisor/arch/arm/include/asm/processor.h | 33 +++++++++
hypervisor/arch/arm/include/asm/sysregs.h | 2 +
hypervisor/arch/arm/mmu_cell.c | 101 +++++++++++++++++++++++++++
hypervisor/arch/arm/setup.c | 29 +++++---
8 files changed, 193 insertions(+), 12 deletions(-)
create mode 100644 hypervisor/arch/arm/include/asm/control.h
create mode 100644 hypervisor/arch/arm/mmu_cell.c

diff --git a/hypervisor/arch/arm/Makefile b/hypervisor/arch/arm/Makefile
index b8cc50b..9bc393e 100644
--- a/hypervisor/arch/arm/Makefile
+++ b/hypervisor/arch/arm/Makefile
@@ -15,7 +15,7 @@ KBUILD_AFLAGS := $(filter-out -include asm/unified.h,$(KBUILD_AFLAGS))
always := built-in.o

obj-y := entry.o dbg-write.o exception.o setup.o lib.o
-obj-y += paging.o mmu_hyp.o
+obj-y += paging.o mmu_hyp.o mmu_cell.o
obj-$(CONFIG_ARCH_VEXPRESS) += dbg-write-pl011.o

# Needed for kconfig
diff --git a/hypervisor/arch/arm/include/asm/cell.h b/hypervisor/arch/arm/include/asm/cell.h
index 121115c..88fe125 100644
--- a/hypervisor/arch/arm/include/asm/cell.h
+++ b/hypervisor/arch/arm/include/asm/cell.h
@@ -14,12 +14,20 @@
#define _JAILHOUSE_ASM_CELL_H

#include <asm/types.h>
-#include <asm/paging.h>
+
+#ifndef __ASSEMBLY__

#include <jailhouse/cell-config.h>
+#include <jailhouse/paging.h>
#include <jailhouse/hypercall.h>

+struct arch_cell {
+ struct paging_structures mm;
+};
+
struct cell {
+ struct arch_cell arch;
+
unsigned int id;
unsigned int data_pages;
struct jailhouse_cell_desc *config;
@@ -39,4 +47,5 @@ struct cell {

extern struct cell root_cell;

+#endif /* !__ASSEMBLY__ */
#endif /* !_JAILHOUSE_ASM_CELL_H */
diff --git a/hypervisor/arch/arm/include/asm/control.h b/hypervisor/arch/arm/include/asm/control.h
new file mode 100644
index 0000000..b569cba
--- /dev/null
+++ b/hypervisor/arch/arm/include/asm/control.h
@@ -0,0 +1,26 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ * Authors:
+ * Jean-Philippe Brucker <jean-***@arm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#ifndef _JAILHOUSE_ASM_CONTROL_H
+#define _JAILHOUSE_ASM_CONTROL_H
+
+#include <asm/cell.h>
+#include <asm/percpu.h>
+
+#ifndef __ASSEMBLY__
+
+int arch_mmu_cell_init(struct cell *cell);
+int arch_mmu_cpu_cell_init(struct per_cpu *cpu_data);
+
+#endif /* !__ASSEMBLY__ */
+
+#endif /* !_JAILHOUSE_ASM_CONTROL_H */
diff --git a/hypervisor/arch/arm/include/asm/paging.h b/hypervisor/arch/arm/include/asm/paging.h
index 251576e..969c71d 100644
--- a/hypervisor/arch/arm/include/asm/paging.h
+++ b/hypervisor/arch/arm/include/asm/paging.h
@@ -98,6 +98,7 @@
#define BLOCK_2M_VADDR_MASK BIT_MASK(20, 0)

#define TTBR_MASK BIT_MASK(47, PADDR_OFF)
+#define VTTBR_VMID_SHIFT 48

#define HTCR_RES1 ((1 << 31) | (1 << 23))
#define VTCR_RES1 ((1 << 31))
diff --git a/hypervisor/arch/arm/include/asm/processor.h b/hypervisor/arch/arm/include/asm/processor.h
index e33550f..85ff33e 100644
--- a/hypervisor/arch/arm/include/asm/processor.h
+++ b/hypervisor/arch/arm/include/asm/processor.h
@@ -52,6 +52,39 @@
#define SCTLR_AFE_BIT (1 << 29)
#define SCTLR_TE_BIT (1 << 30)

+#define HCR_TRVM_BIT (1 << 30)
+#define HCR_TVM_BIT (1 << 26)
+#define HCR_HDC_BIT (1 << 29)
+#define HCR_TGE_BIT (1 << 27)
+#define HCR_TTLB_BIT (1 << 25)
+#define HCR_TPU_BIT (1 << 24)
+#define HCR_TPC_BIT (1 << 23)
+#define HCR_TSW_BIT (1 << 22)
+#define HCR_TAC_BIT (1 << 21)
+#define HCR_TIDCP_BIT (1 << 20)
+#define HCR_TSC_BIT (1 << 19)
+#define HCR_TID3_BIT (1 << 18)
+#define HCR_TID2_BIT (1 << 17)
+#define HCR_TID1_BIT (1 << 16)
+#define HCR_TID0_BIT (1 << 15)
+#define HCR_TWE_BIT (1 << 14)
+#define HCR_TWI_BIT (1 << 13)
+#define HCR_DC_BIT (1 << 12)
+#define HCR_BSU_BITS (3 << 10)
+#define HCR_BSU_INNER (1 << 10)
+#define HCR_BSU_OUTER (2 << 10)
+#define HCR_BSU_FULL HCR_BSU_BITS
+#define HCR_FB_BIT (1 << 9)
+#define HCR_VA_BIT (1 << 8)
+#define HCR_VI_BIT (1 << 7)
+#define HCR_VF_BIT (1 << 6)
+#define HCR_AMO_BIT (1 << 5)
+#define HCR_IMO_BIT (1 << 4)
+#define HCR_FMO_BIT (1 << 3)
+#define HCR_PTW_BIT (1 << 2)
+#define HCR_SWIO_BIT (1 << 1)
+#define HCR_VM_BIT (1 << 0)
+
#define PAR_F_BIT 0x1
#define PAR_FST_SHIFT 1
#define PAR_FST_MASK 0x3f
diff --git a/hypervisor/arch/arm/include/asm/sysregs.h b/hypervisor/arch/arm/include/asm/sysregs.h
index 261d934..ea7bc7a 100644
--- a/hypervisor/arch/arm/include/asm/sysregs.h
+++ b/hypervisor/arch/arm/include/asm/sysregs.h
@@ -40,6 +40,8 @@
#define PAR_EL1 SYSREG_64(0, c7)

/* AArch32-specific registers */
+#define HCR SYSREG_32(4, c1, c1, 0)
+#define HCR2 SYSREG_32(4, c1, c1, 4)
#define HMAIR0 SYSREG_32(4, c10, c2, 0)
#define HMAIR1 SYSREG_32(4, c10, c2, 1)
#define HVBAR SYSREG_32(4, c12, c0, 0)
diff --git a/hypervisor/arch/arm/mmu_cell.c b/hypervisor/arch/arm/mmu_cell.c
new file mode 100644
index 0000000..fcd977a
--- /dev/null
+++ b/hypervisor/arch/arm/mmu_cell.c
@@ -0,0 +1,101 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ * Authors:
+ * Jean-Philippe Brucker <jean-***@arm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <asm/control.h>
+#include <asm/sysregs.h>
+#include <jailhouse/control.h>
+#include <jailhouse/paging.h>
+#include <jailhouse/printk.h>
+
+int arch_map_memory_region(struct cell *cell,
+ const struct jailhouse_memory *mem)
+{
+ u64 phys_start = mem->phys_start;
+ u32 flags = PTE_FLAG_VALID | PTE_ACCESS_FLAG;
+
+ if (mem->flags & JAILHOUSE_MEM_READ)
+ flags |= S2_PTE_ACCESS_RO;
+ if (mem->flags & JAILHOUSE_MEM_WRITE)
+ flags |= S2_PTE_ACCESS_WO;
+ /*
+ * `DMA' may be a bit misleading here: it is used to define MMIO regions
+ */
+ if (mem->flags & JAILHOUSE_MEM_DMA)
+ flags |= S2_PTE_FLAG_DEVICE;
+ else
+ flags |= S2_PTE_FLAG_NORMAL;
+ if (mem->flags & JAILHOUSE_MEM_COMM_REGION)
+ phys_start = page_map_hvirt2phys(&cell->comm_page);
+ /*
+ if (!(mem->flags & JAILHOUSE_MEM_EXECUTE))
+ flags |= S2_PAGE_ACCESS_XN;
+ */
+
+ return page_map_create(&cell->arch.mm, phys_start, mem->size,
+ mem->virt_start, flags, PAGE_MAP_NON_COHERENT);
+}
+
+int arch_unmap_memory_region(struct cell *cell,
+ const struct jailhouse_memory *mem)
+{
+ return page_map_destroy(&cell->arch.mm, mem->virt_start, mem->size,
+ PAGE_MAP_NON_COHERENT);
+}
+
+unsigned long arch_page_map_gphys2phys(struct per_cpu *cpu_data,
+ unsigned long gphys)
+{
+ /* Translate IPA->PA */
+ return page_map_virt2phys(&cpu_data->cell->arch.mm, gphys);
+}
+
+int arch_mmu_cell_init(struct cell *cell)
+{
+ cell->arch.mm.root_paging = hv_paging;
+ cell->arch.mm.root_table = page_alloc(&mem_pool, 1);
+ if (!cell->arch.mm.root_table)
+ return -ENOMEM;
+
+ return 0;
+}
+
+int arch_mmu_cpu_cell_init(struct per_cpu *cpu_data)
+{
+ struct cell *cell = cpu_data->cell;
+ unsigned long cell_table = page_map_hvirt2phys(cell->arch.mm.root_table);
+ u64 vttbr = 0;
+ u32 vtcr = T0SZ
+ | SL0 << TCR_SL0_SHIFT
+ | (TCR_RGN_WB_WA << TCR_IRGN0_SHIFT)
+ | (TCR_RGN_WB_WA << TCR_ORGN0_SHIFT)
+ | (TCR_INNER_SHAREABLE << TCR_SH0_SHIFT)
+ | VTCR_RES1;
+
+ if (cell->id > 0xff) {
+ panic_printk("No cell ID available\n");
+ return -E2BIG;
+ }
+ vttbr |= (u64)cell->id << VTTBR_VMID_SHIFT;
+ vttbr |= (u64)(cell_table & TTBR_MASK);
+
+ arm_write_sysreg(VTTBR_EL2, vttbr);
+ arm_write_sysreg(VTCR_EL2, vtcr);
+
+ isb();
+ /*
+ * Invalidate all stage-1 and 2 TLB entries for the current VMID
+ * ERET will ensure completion of these ops
+ */
+ arm_write_sysreg(TLBIALL, 1);
+
+ return 0;
+}
diff --git a/hypervisor/arch/arm/setup.c b/hypervisor/arch/arm/setup.c
index f895b16..881e196 100644
--- a/hypervisor/arch/arm/setup.c
+++ b/hypervisor/arch/arm/setup.c
@@ -10,22 +10,33 @@
* the COPYING file in the top-level directory.
*/

+#include <asm/control.h>
#include <asm/percpu.h>
#include <asm/platform.h>
#include <asm/setup.h>
#include <asm/sysregs.h>
+#include <jailhouse/control.h>
#include <jailhouse/entry.h>
#include <jailhouse/paging.h>
#include <jailhouse/string.h>

int arch_init_early(void)
{
- return arch_map_device(UART_BASE_PHYS, UART_BASE_VIRT, PAGE_SIZE);
+ int err = 0;
+
+ err = arch_mmu_cell_init(&root_cell);
+ if (err)
+ return err;
+
+ err = arch_map_device(UART_BASE_PHYS, UART_BASE_VIRT, PAGE_SIZE);
+
+ return err;
}

int arch_cpu_init(struct per_cpu *cpu_data)
{
int err = 0;
+ unsigned long hcr = HCR_VM_BIT;

/*
* Copy the registers to restore from the linux stack here, because we
@@ -35,6 +46,8 @@ int arch_cpu_init(struct per_cpu *cpu_data)
* sizeof(unsigned long));

err = switch_exception_level(cpu_data);
+ if (err)
+ return err;

/*
* Save pointer in the thread local storage
@@ -43,6 +56,11 @@ int arch_cpu_init(struct per_cpu *cpu_data)
*/
arm_write_sysreg(TPIDR_EL2, cpu_data);

+ /* Setup guest traps */
+ arm_write_sysreg(HCR, hcr);
+
+ err = arch_mmu_cpu_cell_init(cpu_data);
+
return err;
}

@@ -75,18 +93,9 @@ void arch_park_cpu(unsigned int cpu_id) {}
void arch_shutdown_cpu(unsigned int cpu_id) {}
int arch_cell_create(struct per_cpu *cpu_data, struct cell *new_cell)
{ return -ENOSYS; }
-int arch_map_memory_region(struct cell *cell,
- const struct jailhouse_memory *mem)
-{ return -ENOSYS; }
-int arch_unmap_memory_region(struct cell *cell,
- const struct jailhouse_memory *mem)
-{ return -ENOSYS; }
void arch_cell_destroy(struct per_cpu *cpu_data, struct cell *new_cell) {}
void arch_config_commit(struct per_cpu *cpu_data,
struct cell *cell_added_removed) {}
void arch_shutdown(void) {}
-unsigned long arch_page_map_gphys2phys(struct per_cpu *cpu_data,
- unsigned long gphys)
-{ return INVALID_PHYS_ADDR; }
void arch_panic_stop(struct per_cpu *cpu_data) {__builtin_unreachable();}
void arch_panic_halt(struct per_cpu *cpu_data) {}
--
1.7.9.5
Jean-Philippe Brucker
2014-08-08 12:02:46 UTC
Permalink
To avoid using inline asm directly in the setup and control code, to
provide clear names for system registers, and to allow easier
refactoring for a future arm64 port, this patch introduces some useful
macros for accessing the core system registers.

For a 64bit port, a couple of wrappers would still need to be added to
modify system registers that have different sizes on the two
architectures (e.g. HCR).
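The two-level expansion these macros rely on can be demonstrated on any
host by substituting snprintf for the mcr instruction; everything except
the snprintf stand-in mirrors the macros in the patch below:

```c
#include <stdio.h>
#include <string.h>

/* A register name expands to "size, coproc operands", and the dispatcher
 * pastes the size onto the accessor name to pick the right encoding. */
#define SYSREG_32(...) 32, __VA_ARGS__
#define SYSREG_64(...) 64, __VA_ARGS__

#define _arm_write_sysreg(size, ...) arm_write_sysreg_ ## size(__VA_ARGS__)
#define arm_write_sysreg(...) _arm_write_sysreg(__VA_ARGS__)

static char buf[64];

/* Host-side stand-in: format the instruction instead of executing it. */
#define arm_write_sysreg_32(op1, crn, crm, op2, val) \
	snprintf(buf, sizeof(buf), \
		 "mcr p15, "#op1", %%0, "#crn", "#crm", "#op2" <- %u", \
		 (unsigned)(val))

#define HVBAR SYSREG_32(4, c12, c0, 0)
```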

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/include/asm/sysregs.h | 68 +++++++++++++++++++++++++++++
1 file changed, 68 insertions(+)
create mode 100644 hypervisor/arch/arm/include/asm/sysregs.h

diff --git a/hypervisor/arch/arm/include/asm/sysregs.h b/hypervisor/arch/arm/include/asm/sysregs.h
new file mode 100644
index 0000000..8be5ce1
--- /dev/null
+++ b/hypervisor/arch/arm/include/asm/sysregs.h
@@ -0,0 +1,68 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ * Authors:
+ * Jean-Philippe Brucker <jean-***@arm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#ifndef _JAILHOUSE_ASM_SYSREGS_H
+#define _JAILHOUSE_ASM_SYSREGS_H
+
+/*
+ * Along with some system register names, this header defines the following
+ * macros for accessing cp15 registers.
+ *
+ * C-side:
+ * - arm_write_sysreg(SYSREG_NAME, var)
+ * - arm_read_sysreg(SYSREG_NAME, var)
+ * asm-side:
+ * - arm_write_sysreg(SYSREG_NAME, reg)
+ * - arm_read_sysreg(SYSREG_NAME, reg)
+ */
+
+
+#define SYSREG_32(...) 32, __VA_ARGS__
+#define SYSREG_64(...) 64, __VA_ARGS__
+
+#define _arm_write_sysreg(size, ...) arm_write_sysreg_ ## size(__VA_ARGS__)
+#define arm_write_sysreg(...) _arm_write_sysreg(__VA_ARGS__)
+
+#define _arm_read_sysreg(size, ...) arm_read_sysreg_ ## size(__VA_ARGS__)
+#define arm_read_sysreg(...) _arm_read_sysreg(__VA_ARGS__)
+
+#ifndef __ASSEMBLY__
+
+#define arm_write_sysreg_32(op1, crn, crm, op2, val) \
+ asm volatile ("mcr p15, "#op1", %0, "#crn", "#crm", "#op2"\n" \
+ : : "r"((u32)(val)))
+#define arm_write_sysreg_64(op1, crm, val) \
+ asm volatile ("mcrr p15, "#op1", %Q0, %R0, "#crm"\n" \
+ : : "r"((u64)(val)))
+
+#define arm_read_sysreg_32(op1, crn, crm, op2, val) \
+ asm volatile ("mrc p15, "#op1", %0, "#crn", "#crm", "#op2"\n" \
+ : "=r"((u32)(val)))
+#define arm_read_sysreg_64(op1, crm, val) \
+ asm volatile ("mrrc p15, "#op1", %Q0, %R0, "#crm"\n" \
+ : "=r"((u64)(val)))
+
+#else /* __ASSEMBLY__ */
+
+#define arm_write_sysreg_32(op1, crn, crm, op2, reg) \
+ mcr p15, op1, reg, crn, crm, op2
+#define arm_write_sysreg_64(op1, crm, reg1, reg2) \
+ mcrr p15, op1, reg1, reg2, crm
+
+#define arm_read_sysreg_32(op1, crn, crm, op2, reg) \
+ mrc p15, op1, reg, crn, crm, op2
+#define arm_read_sysreg_64(op1, crm, reg1, reg2) \
+ mrrc p15, op1, reg1, reg2, crm
+
+#endif /* __ASSEMBLY__ */
+
+#endif
--
1.7.9.5
Jean-Philippe Brucker
2014-08-08 12:02:50 UTC
Permalink
Instead of attempting to reinvent the wheel, this patch copies the
ticket spinlock implementation from Linux, which allows for fair and
efficient locking between heterogeneous cores.
It contains a few subtleties, such as a preload instruction that ensures
the cache line is immediately loaded in exclusive state.

Big endian is not supported for the moment, but as soon as we have a
macro declaring the use of this mode, the change will be trivial here.
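The algorithm itself can be sketched portably with C11 atomics (names
hypothetical): takers atomically increment the "next" half and spin
until "owner" reaches their ticket, which grants the lock in FIFO order.
The real implementation below uses ldrex/strex and wfe/sev instead.

```c
#include <stdatomic.h>
#include <stdint.h>

typedef struct {
	atomic_uint_least16_t owner; /* ticket currently being served */
	atomic_uint_least16_t next;  /* next ticket to hand out */
} ticket_lock_t;

static void ticket_lock(ticket_lock_t *l)
{
	/* Grab a ticket; the increment is the only contended atomic op. */
	uint16_t ticket = atomic_fetch_add(&l->next, 1);

	while (atomic_load(&l->owner) != ticket)
		; /* the ARM version sleeps in wfe here */
}

static void ticket_unlock(ticket_lock_t *l)
{
	/* Only the owner unlocks, so a plain increment suffices; the ARM
	 * version follows with dsb(ishst) and sev to wake waiters. */
	atomic_fetch_add(&l->owner, 1);
}
```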

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/include/asm/spinlock.h | 61 ++++++++++++++++++++++++----
1 file changed, 54 insertions(+), 7 deletions(-)

diff --git a/hypervisor/arch/arm/include/asm/spinlock.h b/hypervisor/arch/arm/include/asm/spinlock.h
index d5cb68c..5e32a59 100644
--- a/hypervisor/arch/arm/include/asm/spinlock.h
+++ b/hypervisor/arch/arm/include/asm/spinlock.h
@@ -8,25 +8,72 @@
*
* This work is licensed under the terms of the GNU GPL, version 2. See
* the COPYING file in the top-level directory.
+ *
+ * Copied from arch/arm/include/asm/spinlock.h in Linux
*/
+#ifndef _JAILHOUSE_ASM_SPINLOCK_H
+#define _JAILHOUSE_ASM_SPINLOCK_H

#include <asm/bitops.h>
#include <asm/processor.h>

-typedef struct {
- unsigned long state;
-} spinlock_t;
+#ifndef __ASSEMBLY__

#define DEFINE_SPINLOCK(name) spinlock_t (name)
+#define TICKET_SHIFT 16
+
+typedef struct {
+ union {
+ u32 slock;
+ struct __raw_tickets {
+ u16 owner;
+ u16 next;
+ } tickets;
+ };
+} spinlock_t;

static inline void spin_lock(spinlock_t *lock)
{
-// while (test_and_set_bit(0, &lock->state))
-// cpu_relax();
+ unsigned long tmp;
+ u32 newval;
+ spinlock_t lockval;
+
+ /* Take the lock by updating the high part atomically */
+ asm volatile (
+" .arch_extension mp\n"
+" pldw [%3]\n"
+"1: ldrex %0, [%3]\n"
+" add %1, %0, %4\n"
+" strex %2, %1, [%3]\n"
+" teq %2, #0\n"
+" bne 1b"
+ : "=&r" (lockval), "=&r" (newval), "=&r" (tmp)
+ : "r" (&lock->slock), "I" (1 << TICKET_SHIFT)
+ : "cc");
+
+ while (lockval.tickets.next != lockval.tickets.owner)
+ asm volatile (
+ "wfe\n"
+ "ldrh %0, [%1]\n"
+ : "=r" (lockval.tickets.owner)
+ : "r" (&lock->tickets.owner));
+
+ /* Ensure we have the lock before doing any more memory ops */
+ dmb(ish);
}

static inline void spin_unlock(spinlock_t *lock)
{
-// asm volatile("": : :"memory");
-// clear_bit(0, &lock->state);
+ /* Ensure all memory ops are finished before releasing the lock */
+ dmb(ish);
+
+ /* No need for an exclusive, since only one CPU can unlock at a time. */
+ lock->tickets.owner++;
+
+ /* Ensure the spinlock is updated before notifying other CPUs */
+ dsb(ishst);
+ sev();
}
+
+#endif /* !__ASSEMBLY__ */
+#endif /* !_JAILHOUSE_ASM_SPINLOCK_H */
--
1.7.9.5
Jean-Philippe Brucker
2014-08-08 12:02:53 UTC
Permalink
The EL2 installation is done in two stages:
- First, an HVC is issued to jump into the kernel's stub and install the
bootstrap vectors.
- Then, a second HVC allows the setup code to switch to the physical
address space.
Execution continues at EL2. Once the whole initialisation is done, the
final vectors are installed, and arch_cpu_activate_vmm does an ERET to
jump back to the kernel, which is now a guest.
Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/Makefile | 3 +-
hypervisor/arch/arm/entry.S | 21 ++++++++++
hypervisor/arch/arm/exception.S | 26 ++++++++++++
hypervisor/arch/arm/include/asm/percpu.h | 3 +-
hypervisor/arch/arm/include/asm/processor.h | 17 ++++++++
hypervisor/arch/arm/include/asm/setup.h | 56 ++++++++++++++++++++++++++
hypervisor/arch/arm/include/asm/setup_mmu.h | 58 +++++++++++++++++++++++++++
hypervisor/arch/arm/include/asm/sysregs.h | 3 ++
hypervisor/arch/arm/mmu_hyp.c | 44 ++++++++++++++++++++
hypervisor/arch/arm/setup.c | 28 ++++++++++++-
10 files changed, 254 insertions(+), 5 deletions(-)
create mode 100644 hypervisor/arch/arm/exception.S
create mode 100644 hypervisor/arch/arm/include/asm/setup.h
create mode 100644 hypervisor/arch/arm/include/asm/setup_mmu.h
create mode 100644 hypervisor/arch/arm/mmu_hyp.c

diff --git a/hypervisor/arch/arm/Makefile b/hypervisor/arch/arm/Makefile
index bb9203c..41e3394 100644
--- a/hypervisor/arch/arm/Makefile
+++ b/hypervisor/arch/arm/Makefile
@@ -14,7 +14,8 @@ KBUILD_AFLAGS := $(filter-out -include asm/unified.h,$(KBUILD_AFLAGS))

always := built-in.o

-obj-y := entry.o dbg-write.o setup.o lib.o
+obj-y := entry.o dbg-write.o exception.o setup.o lib.o
+obj-y += mmu_hyp.o
obj-$(CONFIG_ARCH_VEXPRESS) += dbg-write-pl011.o

# Needed for kconfig
diff --git a/hypervisor/arch/arm/entry.S b/hypervisor/arch/arm/entry.S
index a910f13..2dd1a9a 100644
--- a/hypervisor/arch/arm/entry.S
+++ b/hypervisor/arch/arm/entry.S
@@ -40,3 +40,24 @@ arch_entry:
add sp, #PERCPU_STACK_END
/* Call entry(cpuid, struct per_cpu*) */
b entry
+
+ .globl bootstrap_vectors
+ .align 5
+bootstrap_vectors:
+ b .
+ b .
+ b .
+ b .
+ b .
+ b setup_el2
+ b .
+ b .
+
+setup_el2:
+ /*
+ * Load the physical values of lr and sp, and continue execution at EL2.
+ */
+ mov lr, r0
+ mov sp, r1
+
+ bx lr
diff --git a/hypervisor/arch/arm/exception.S b/hypervisor/arch/arm/exception.S
new file mode 100644
index 0000000..8c483aa
--- /dev/null
+++ b/hypervisor/arch/arm/exception.S
@@ -0,0 +1,26 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ * Authors:
+ * Jean-Philippe Brucker <jean-***@arm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <asm/head.h>
+
+ .text
+ .globl hyp_vectors
+ .align 5
+hyp_vectors:
+ b .
+ b .
+ b .
+ b .
+ b .
+ b .
+ b .
+ b .
diff --git a/hypervisor/arch/arm/include/asm/percpu.h b/hypervisor/arch/arm/include/asm/percpu.h
index 1121ee8..b361116 100644
--- a/hypervisor/arch/arm/include/asm/percpu.h
+++ b/hypervisor/arch/arm/include/asm/percpu.h
@@ -33,6 +33,7 @@ struct per_cpu {
unsigned long linux_sp;
unsigned long linux_ret;
unsigned long linux_flags;
+ unsigned long linux_reg[NUM_ENTRY_REGS];

unsigned int cpu_id;
// u32 apic_id;
@@ -40,8 +41,6 @@ struct per_cpu {

u32 stats[JAILHOUSE_NUM_CPU_STATS];

- unsigned long linux_reg[NUM_ENTRY_REGS];
-// unsigned long linux_ip;
bool initialized;

volatile bool stop_cpu;
diff --git a/hypervisor/arch/arm/include/asm/processor.h b/hypervisor/arch/arm/include/asm/processor.h
index 25bab65..61ff3f2 100644
--- a/hypervisor/arch/arm/include/asm/processor.h
+++ b/hypervisor/arch/arm/include/asm/processor.h
@@ -13,6 +13,23 @@
#ifndef _JAILHOUSE_ASM_PROCESSOR_H
#define _JAILHOUSE_ASM_PROCESSOR_H

+#define PSR_MODE_MASK 0xf
+#define PSR_USR_MODE 0x0
+#define PSR_FIQ_MODE 0x1
+#define PSR_IRQ_MODE 0x2
+#define PSR_SVC_MODE 0x3
+#define PSR_MON_MODE 0x6
+#define PSR_ABT_MODE 0x7
+#define PSR_HYP_MODE 0xa
+#define PSR_UND_MODE 0xb
+#define PSR_SYS_MODE 0xf
+
+#define PSR_32_BIT (1 << 4)
+#define PSR_T_BIT (1 << 5)
+#define PSR_F_BIT (1 << 6)
+#define PSR_I_BIT (1 << 7)
+#define PSR_A_BIT (1 << 8)
+
#define MPIDR_CPUID_MASK 0x00ffffff

#ifndef __ASSEMBLY__
diff --git a/hypervisor/arch/arm/include/asm/setup.h b/hypervisor/arch/arm/include/asm/setup.h
new file mode 100644
index 0000000..e73214c
--- /dev/null
+++ b/hypervisor/arch/arm/include/asm/setup.h
@@ -0,0 +1,56 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ * Authors:
+ * Jean-Philippe Brucker <jean-***@arm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#ifndef _JAILHOUSE_ASM_SETUP_H
+#define _JAILHOUSE_ASM_SETUP_H
+
+#include <asm/head.h>
+#include <asm/percpu.h>
+
+#ifndef __ASSEMBLY__
+
+static inline void __attribute__((always_inline))
+cpu_return_el1(struct per_cpu *cpu_data)
+{
+ /* Return value */
+ cpu_data->linux_reg[0] = 0;
+
+ asm volatile(
+ /* Reset the hypervisor stack */
+ "mov sp, %4\n"
+
+ "msr sp_svc, %0\n"
+ "msr elr_hyp, %1\n"
+ "msr spsr_hyp, %2\n"
+ /*
+ * We don't care about clobbering the other registers from now on. Must
+ * be in sync with arch_entry.
+ */
+ "ldm %3, {r0 - r12}\n"
+ /* After this, the kernel won't be able to access the hypervisor code */
+ "eret\n"
+ :
+ : "r" (cpu_data->linux_sp + (NUM_ENTRY_REGS * sizeof(unsigned long))),
+ "r" (cpu_data->linux_ret),
+ "r" (cpu_data->linux_flags),
+ "r" (cpu_data->linux_reg),
+ "r" (cpu_data->stack + PERCPU_STACK_END)
+ :);
+}
+
+int switch_exception_level(struct per_cpu *cpu_data);
+inline int arch_map_device(unsigned long paddr, unsigned long vaddr,
+ unsigned long size);
+inline int arch_unmap_device(unsigned long addr, unsigned long size);
+
+#endif /* !__ASSEMBLY__ */
+#endif /* !_JAILHOUSE_ASM_SETUP_H */
diff --git a/hypervisor/arch/arm/include/asm/setup_mmu.h b/hypervisor/arch/arm/include/asm/setup_mmu.h
new file mode 100644
index 0000000..758f516
--- /dev/null
+++ b/hypervisor/arch/arm/include/asm/setup_mmu.h
@@ -0,0 +1,58 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ * Authors:
+ * Jean-Philippe Brucker <jean-***@arm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#ifndef _JAILHOUSE_ASM_SETUP_MMU_H
+#define _JAILHOUSE_ASM_SETUP_MMU_H
+
+#include <asm/head.h>
+#include <asm/percpu.h>
+
+#ifndef __ASSEMBLY__
+
+/* Procedures used to translate addresses during the MMU setup process */
+typedef void* (*phys2virt_t)(unsigned long);
+typedef unsigned long (*virt2phys_t)(volatile const void *);
+
+static void __attribute__((naked)) __attribute__((noinline))
+cpu_switch_el2(unsigned long phys_bootstrap, virt2phys_t virt2phys)
+{
+ asm volatile(
+ /*
+ * The linux hyp stub allows to install the vectors with a single hvc.
+ * The vector base address is in r0 (phys_bootstrap).
+ */
+ "hvc #0\n"
+
+ /*
+ * Now that the bootstrap vectors are installed, call setup_el2 with
+ * the translated physical values of lr and sp as arguments
+ */
+ "mov r0, sp\n"
+ "push {lr}\n"
+ "blx %0\n"
+ "pop {lr}\n"
+ "push {r0}\n"
+ "mov r0, lr\n"
+ "blx %0\n"
+ "pop {r1}\n"
+ "hvc #0\n"
+ :
+ : "r" (virt2phys)
+ /*
+ * The call to virt2phys may clobber all temp registers. This list
+ * ensures that the compiler uses a decent register for hvirt2phys.
+ */
+ : "cc", "memory", "r0", "r1", "r2", "r3");
+}
+
+#endif /* !__ASSEMBLY__ */
+#endif /* _JAILHOUSE_ASM_SETUP_MMU_H */
diff --git a/hypervisor/arch/arm/include/asm/sysregs.h b/hypervisor/arch/arm/include/asm/sysregs.h
index b5dddcf..b27375f 100644
--- a/hypervisor/arch/arm/include/asm/sysregs.h
+++ b/hypervisor/arch/arm/include/asm/sysregs.h
@@ -30,6 +30,9 @@
* (Use the AArch64 names to ease the compatibility work)
*/
#define MPIDR_EL1 SYSREG_32(0, c0, c0, 5)
+#define TPIDR_EL2 SYSREG_32(4, c13, c0, 2)
+
+#define HVBAR SYSREG_32(4, c12, c0, 0)

#define SYSREG_32(...) 32, __VA_ARGS__
#define SYSREG_64(...) 64, __VA_ARGS__
diff --git a/hypervisor/arch/arm/mmu_hyp.c b/hypervisor/arch/arm/mmu_hyp.c
new file mode 100644
index 0000000..c756576
--- /dev/null
+++ b/hypervisor/arch/arm/mmu_hyp.c
@@ -0,0 +1,44 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ * Authors:
+ * Jean-Philippe Brucker <jean-***@arm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <asm/setup.h>
+#include <asm/setup_mmu.h>
+#include <asm/sysregs.h>
+#include <jailhouse/paging.h>
+
+/*
+ * Jumping to EL2 in the same C code represents an interesting challenge, since
+ * it will switch from virtual addresses to physical ones, and then back to
+ * virtual after setting up the EL2 MMU.
+ */
+int switch_exception_level(struct per_cpu *cpu_data)
+{
+ extern unsigned long bootstrap_vectors;
+ extern unsigned long hyp_vectors;
+
+ /* Save the virtual address of the phys2virt function for later */
+ phys2virt_t phys2virt = page_map_phys2hvirt;
+ virt2phys_t virt2phys = page_map_hvirt2phys;
+ unsigned long phys_bootstrap = virt2phys(&bootstrap_vectors);
+
+ cpu_switch_el2(phys_bootstrap, virt2phys);
+ /*
+ * At this point, we are at EL2, and we work with physical addresses.
+ * The MMU needs to be initialised and execution must go back to virtual
+ * addresses before returning, or else we are pretty much doomed.
+ */
+
+ /* Set the new vectors once we're back to a sane, virtual state */
+ arm_write_sysreg(HVBAR, &hyp_vectors);
+
+ return 0;
+}
diff --git a/hypervisor/arch/arm/setup.c b/hypervisor/arch/arm/setup.c
index 99dc79c..599ad39 100644
--- a/hypervisor/arch/arm/setup.c
+++ b/hypervisor/arch/arm/setup.c
@@ -10,7 +10,11 @@
* the COPYING file in the top-level directory.
*/

+#include <asm/setup.h>
+#include <asm/sysregs.h>
#include <jailhouse/entry.h>
+#include <jailhouse/paging.h>
+#include <jailhouse/string.h>

int arch_init_early(void)
{
@@ -19,7 +23,25 @@ int arch_init_early(void)

int arch_cpu_init(struct per_cpu *cpu_data)
{
- return -ENOSYS;
+ int err = 0;
+
+ /*
+ * Copy the registers to restore from the linux stack here, because we
+ * won't be able to access it later
+ */
+ memcpy(&cpu_data->linux_reg, (void *)cpu_data->linux_sp, NUM_ENTRY_REGS
+ * sizeof(unsigned long));
+
+ err = switch_exception_level(cpu_data);
+
+ /*
+ * Save pointer in the thread local storage
+ * Must be done early in order to handle aborts and errors in the setup
+ * code.
+ */
+ arm_write_sysreg(TPIDR_EL2, cpu_data);
+
+ return err;
}

int arch_init_late(void)
@@ -29,6 +51,9 @@ int arch_init_late(void)

void arch_cpu_activate_vmm(struct per_cpu *cpu_data)
{
+ /* Return to the kernel */
+ cpu_return_el1(cpu_data);
+
while (1);
}

@@ -41,7 +66,6 @@ void arch_cpu_restore(struct per_cpu *cpu_data)
#include <jailhouse/processor.h>
#include <jailhouse/control.h>
#include <jailhouse/string.h>
-#include <jailhouse/paging.h>
void arch_suspend_cpu(unsigned int cpu_id) {}
void arch_resume_cpu(unsigned int cpu_id) {}
void arch_reset_cpu(unsigned int cpu_id) {}
--
1.7.9.5
--
You received this message because you are subscribed to the Google Groups "Jailhouse" group.
To unsubscribe from this group and stop receiving emails from it, send an email to jailhouse-dev+***@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
Jean-Philippe Brucker
2014-08-08 12:02:58 UTC
Verify that virtualization is actually supported before going any further
in the initialisation process.

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/include/asm/processor.h | 2 ++
hypervisor/arch/arm/include/asm/sysregs.h | 2 ++
hypervisor/arch/arm/setup.c | 14 ++++++++++++++
3 files changed, 18 insertions(+)

diff --git a/hypervisor/arch/arm/include/asm/processor.h b/hypervisor/arch/arm/include/asm/processor.h
index 85ff33e..7835fc4 100644
--- a/hypervisor/arch/arm/include/asm/processor.h
+++ b/hypervisor/arch/arm/include/asm/processor.h
@@ -34,6 +34,8 @@

#define MPIDR_CPUID_MASK 0x00ffffff

+#define PFR1_VIRT(pfr) ((pfr) >> 12 & 0xf)
+
#define SCTLR_M_BIT (1 << 0)
#define SCTLR_A_BIT (1 << 1)
#define SCTLR_C_BIT (1 << 2)
diff --git a/hypervisor/arch/arm/include/asm/sysregs.h b/hypervisor/arch/arm/include/asm/sysregs.h
index ea7bc7a..1679734 100644
--- a/hypervisor/arch/arm/include/asm/sysregs.h
+++ b/hypervisor/arch/arm/include/asm/sysregs.h
@@ -30,6 +30,8 @@
* (Use the AArch64 names to ease the compatibility work)
*/
#define MPIDR_EL1 SYSREG_32(0, c0, c0, 5)
+#define ID_PFR0_EL1 SYSREG_32(0, c0, c1, 0)
+#define ID_PFR1_EL1 SYSREG_32(0, c0, c1, 1)
#define SCTLR_EL2 SYSREG_32(4, c1, c0, 0)
#define TPIDR_EL2 SYSREG_32(4, c13, c0, 2)
#define TTBR0_EL2 SYSREG_64(4, c2)
diff --git a/hypervisor/arch/arm/setup.c b/hypervisor/arch/arm/setup.c
index 881e196..08761d3 100644
--- a/hypervisor/arch/arm/setup.c
+++ b/hypervisor/arch/arm/setup.c
@@ -20,10 +20,24 @@
#include <jailhouse/paging.h>
#include <jailhouse/string.h>

+static int arch_check_features(void)
+{
+ u32 pfr1;
+ arm_read_sysreg(ID_PFR1_EL1, pfr1);
+
+ if (!PFR1_VIRT(pfr1))
+ return -ENODEV;
+
+ return 0;
+}
+
int arch_init_early(void)
{
int err = 0;

+ if ((err = arch_check_features()) != 0)
+ return err;
+
err = arch_mmu_cell_init(&root_cell);
if (err)
return err;
--
1.7.9.5
Jean-Philippe Brucker
2014-08-08 12:03:04 UTC
This patch introduces a pending_irq structure to provide a level of
abstraction for storing the interrupts waiting to be injected into the
cell. Each CPU is allocated an array of 256 pending IRQs, which should
be more than enough. Insertion finds the first available slot and builds
a linked list of pending vIRQs.

Two cases justify the need for this structure:
- The GIC has a limited number of list registers for injecting virtual
interrupts. Once they are full, software must store the pending ones
itself, and use the GIC's maintenance IRQ to be informed when they are
available again.
In Jailhouse, this case should be very rare since IRQs are injected
directly, but it must be taken into account nonetheless.
- IPIs sent by a core need to be stored somewhere to let the other CPUs
inject them into their own list registers.

The GIC backend will need to call irqchip_inject_pending when receiving
a maintenance IRQ or a synchronisation SGI in order to clean the list.

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/gic-v3.c | 6 ++
hypervisor/arch/arm/include/asm/irqchip.h | 34 ++++++++
hypervisor/arch/arm/include/asm/percpu.h | 8 ++
hypervisor/arch/arm/irqchip.c | 126 +++++++++++++++++++++++++++++
4 files changed, 174 insertions(+)

diff --git a/hypervisor/arch/arm/gic-v3.c b/hypervisor/arch/arm/gic-v3.c
index b8ffaa8..b0c5dac 100644
--- a/hypervisor/arch/arm/gic-v3.c
+++ b/hypervisor/arch/arm/gic-v3.c
@@ -151,9 +151,15 @@ static void gic_handle_irq(struct per_cpu *cpu_data)
{
}

+static int gic_inject_irq(struct per_cpu *cpu_data, struct pending_irq *irq)
+{
+ return 0;
+}
+
struct irqchip_ops gic_irqchip = {
.init = gic_init,
.cpu_init = gic_cpu_init,
.send_sgi = gic_send_sgi,
.handle_irq = gic_handle_irq,
+ .inject_irq = gic_inject_irq,
};
diff --git a/hypervisor/arch/arm/include/asm/irqchip.h b/hypervisor/arch/arm/include/asm/irqchip.h
index 0ef5fe0..3fa37fd 100644
--- a/hypervisor/arch/arm/include/asm/irqchip.h
+++ b/hypervisor/arch/arm/include/asm/irqchip.h
@@ -13,6 +13,14 @@
#ifndef _JAILHOUSE_ASM_IRQCHIP_H
#define _JAILHOUSE_ASM_IRQCHIP_H

+/*
+ * Since there is no finer-grained allocation than page-alloc for the moment,
+ * and it is very complicated to predict the total size needed at
+ * initialisation, each cpu is allocated one page of pending irqs.
+ * This allows for 256 pending IRQs, which should be sufficient.
+ */
+#define MAX_PENDING_IRQS (PAGE_SIZE / sizeof(struct pending_irq))
+
#include <asm/percpu.h>

#ifndef __ASSEMBLY__
@@ -40,13 +48,39 @@ struct irqchip_ops {

int (*send_sgi)(struct sgi *sgi);
void (*handle_irq)(struct per_cpu *cpu_data);
+ int (*inject_irq)(struct per_cpu *cpu_data, struct pending_irq *irq);
};

+/* Virtual interrupts waiting to be injected */
+struct pending_irq {
+ u32 virt_id;
+
+ u8 priority;
+ u8 hw;
+ union {
+ /* Physical id, when hw is 1 */
+ u16 irq;
+ struct {
+ /* GICv2 needs cpuid for SGIs */
+ u16 cpuid : 15;
+ /* EOI generates a maintenance irq */
+ u16 maintenance : 1;
+ } sgi __attribute__((packed));
+ } type;
+
+ struct pending_irq *next;
+ struct pending_irq *prev;
+} __attribute__((packed));
+
int irqchip_init(void);
int irqchip_cpu_init(struct per_cpu *cpu_data);

int irqchip_send_sgi(struct sgi *sgi);
void irqchip_handle_irq(struct per_cpu *cpu_data);

+int irqchip_inject_pending(struct per_cpu *cpu_data);
+int irqchip_insert_pending(struct per_cpu *cpu_data, struct pending_irq *irq);
+int irqchip_remove_pending(struct per_cpu *cpu_data, struct pending_irq *irq);
+
#endif /* __ASSEMBLY__ */
#endif /* _JAILHOUSE_ASM_IRQCHIP_H */
diff --git a/hypervisor/arch/arm/include/asm/percpu.h b/hypervisor/arch/arm/include/asm/percpu.h
index e224254..16750c0 100644
--- a/hypervisor/arch/arm/include/asm/percpu.h
+++ b/hypervisor/arch/arm/include/asm/percpu.h
@@ -26,6 +26,9 @@
#ifndef __ASSEMBLY__

#include <asm/cell.h>
+#include <asm/spinlock.h>
+
+struct pending_irq;

struct per_cpu {
/* Keep these two in sync with defines above! */
@@ -36,6 +39,11 @@ struct per_cpu {
unsigned long linux_reg[NUM_ENTRY_REGS];

unsigned int cpu_id;
+
+ /* Other CPUs can insert sgis into the pending array */
+ spinlock_t gic_lock;
+ struct pending_irq *pending_irqs;
+ struct pending_irq *first_pending;
/* Only GICv3: redistributor base */
void *gicr_base;

diff --git a/hypervisor/arch/arm/irqchip.c b/hypervisor/arch/arm/irqchip.c
index 8fb4415..75acdd7 100644
--- a/hypervisor/arch/arm/irqchip.c
+++ b/hypervisor/arch/arm/irqchip.c
@@ -31,6 +31,126 @@ unsigned long gicd_size;
static bool irqchip_is_init;
static struct irqchip_ops irqchip;

+static int irqchip_init_pending(struct per_cpu *cpu_data)
+{
+ struct pending_irq *pend_array = page_alloc(&mem_pool, 1);
+
+ if (pend_array == NULL)
+ return -ENOMEM;
+ memset(pend_array, 0, PAGE_SIZE);
+
+ cpu_data->pending_irqs = pend_array;
+ cpu_data->first_pending = NULL;
+
+ return 0;
+}
+
+/*
+ * Find the first available pending struct for insertion. The `prev' pointer is
+ * set to the previous pending interrupt, if any, to help inserting the new one
+ * into the list.
+ * Returns NULL when no slot is available
+ */
+static struct pending_irq* get_pending_slot(struct per_cpu *cpu_data,
+ struct pending_irq **prev)
+{
+ u32 i, pending_idx;
+ struct pending_irq *pending = cpu_data->first_pending;
+
+ *prev = NULL;
+
+ for (i = 0; i < MAX_PENDING_IRQS; i++) {
+ pending_idx = pending - cpu_data->pending_irqs;
+ if (pending == NULL || i < pending_idx)
+ return cpu_data->pending_irqs + i;
+
+ *prev = pending;
+ pending = pending->next;
+ }
+
+ return NULL;
+}
+
+int irqchip_insert_pending(struct per_cpu *cpu_data, struct pending_irq *irq)
+{
+ struct pending_irq *prev = NULL;
+ struct pending_irq *slot;
+
+ spin_lock(&cpu_data->gic_lock);
+
+ slot = get_pending_slot(cpu_data, &prev);
+ if (slot == NULL) {
+ spin_unlock(&cpu_data->gic_lock);
+ return -ENOMEM;
+ }
+
+ /*
+ * Don't override the pointers yet, they may be read by the injection
+ * loop. Odds are astronomically low, but hey.
+ */
+ memcpy(slot, irq, sizeof(struct pending_irq) - 2 * sizeof(void *));
+ slot->prev = prev;
+ if (prev) {
+ slot->next = prev->next;
+ prev->next = slot;
+ } else {
+ slot->next = cpu_data->first_pending;
+ cpu_data->first_pending = slot;
+ }
+ if (slot->next)
+ slot->next->prev = slot;
+
+ spin_unlock(&cpu_data->gic_lock);
+
+ return 0;
+}
+
+/*
+ * Only executed by `irqchip_inject_pending' on a CPU to inject its own stuff.
+ */
+int irqchip_remove_pending(struct per_cpu *cpu_data, struct pending_irq *irq)
+{
+ spin_lock(&cpu_data->gic_lock);
+
+ if (cpu_data->first_pending == irq)
+ cpu_data->first_pending = irq->next;
+ if (irq->prev)
+ irq->prev->next = irq->next;
+ if (irq->next)
+ irq->next->prev = irq->prev;
+
+ spin_unlock(&cpu_data->gic_lock);
+
+ return 0;
+}
+
+int irqchip_inject_pending(struct per_cpu *cpu_data)
+{
+ int err;
+ struct pending_irq *pending = cpu_data->first_pending;
+
+ while (pending != NULL) {
+ err = irqchip.inject_irq(cpu_data, pending);
+ if (err == -EBUSY)
+ /* The list registers are full. */
+ break;
+ else
+ /*
+ * Removal only changes the pointers, but does not
+ * deallocate anything.
+ * Concurrent accesses are avoided with the spinlock,
+ * but the `next' pointer of the current pending object
+ * may be rewritten by an external insert before or
+ * after this removal, which isn't an issue.
+ */
+ irqchip_remove_pending(cpu_data, pending);
+
+ pending = pending->next;
+ }
+
+ return 0;
+}
+
void irqchip_handle_irq(struct per_cpu *cpu_data)
{
irqchip.handle_irq(cpu_data);
@@ -43,6 +163,12 @@ int irqchip_send_sgi(struct sgi *sgi)

int irqchip_cpu_init(struct per_cpu *cpu_data)
{
+ int err;
+
+ err = irqchip_init_pending(cpu_data);
+ if (err)
+ return err;
+
if (irqchip.cpu_init)
return irqchip.cpu_init(cpu_data);
--
1.7.9.5
Jean-Philippe Brucker
2014-08-08 12:02:45 UTC
Each CPU saves its general-purpose registers on the stack, switches to
the hypervisor stack and saves the return context in the per-CPU data.
After that, it jumps to the core entry code, which does the necessary
initialisation to set up EL2.

Clusters are not supported yet: they will require the entry code to
fetch the total number of possible CPUs from the header, in order to
deduce an efficient base address for the per-CPU data.

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/entry.S | 27 +++++++++++++++++++++------
hypervisor/arch/arm/include/asm/percpu.h | 4 +++-
2 files changed, 24 insertions(+), 7 deletions(-)

diff --git a/hypervisor/arch/arm/entry.S b/hypervisor/arch/arm/entry.S
index 25325a0..a910f13 100644
--- a/hypervisor/arch/arm/entry.S
+++ b/hypervisor/arch/arm/entry.S
@@ -17,11 +17,26 @@
.text
.globl arch_entry
arch_entry:
- mvn %r0,#~-38
- bx %lr
+ /* r0: cpuid */
+ push {r0 - r12}

+ ldr r1, =__page_pool
+ mov r2, #1
+ lsl r2, #PERCPU_SIZE_SHIFT
+ /*
+ * percpu data = pool + cpuid * shift
+ * TODO: handle aff1 and aff2
+ */
+ mla r1, r2, r0, r1
+ add r2, r1, #PERCPU_LINUX_SP

-/* Fix up Global Offset Table with absolute hypervisor address */
- .globl got_init
-got_init:
- bx %lr
+ /* Save SP, LR, CPSR */
+ str sp, [r2], #4
+ str lr, [r2], #4
+ mrs r3, cpsr
+ str r3, [r2]
+
+ mov sp, r1
+ add sp, #PERCPU_STACK_END
+ /* Call entry(cpuid, struct per_cpu*) */
+ b entry
diff --git a/hypervisor/arch/arm/include/asm/percpu.h b/hypervisor/arch/arm/include/asm/percpu.h
index c8aaf09..1121ee8 100644
--- a/hypervisor/arch/arm/include/asm/percpu.h
+++ b/hypervisor/arch/arm/include/asm/percpu.h
@@ -16,7 +16,7 @@
#include <asm/types.h>
#include <asm/paging.h>

-#define NUM_ENTRY_REGS 6
+#define NUM_ENTRY_REGS 13

/* Keep in sync with struct per_cpu! */
#define PERCPU_SIZE_SHIFT 13
@@ -31,6 +31,8 @@ struct per_cpu {
/* Keep these two in sync with defines above! */
u8 stack[PAGE_SIZE];
unsigned long linux_sp;
+ unsigned long linux_ret;
+ unsigned long linux_flags;

unsigned int cpu_id;
// u32 apic_id;
--
1.7.9.5
Jean-Philippe Brucker
2014-08-08 12:02:49 UTC
Linux's arch/arm/include/asm/io.h uses inline asm to make sure that the
compiler doesn't generate any register write-back, which is extremely
difficult to emulate.
This shouldn't be a requirement in Jailhouse, since ARM doesn't currently
support nested hypervisors.
The `volatile' qualifier should thus be sufficient to ensure that the
compiler doesn't optimize these accesses away.

This patch still separates the IO accessors from the core, to facilitate
a possible replacement with inline assembly.

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/include/asm/io.h | 63 ++++++++++++++++++++++++++++++++++
1 file changed, 63 insertions(+)
create mode 100644 hypervisor/arch/arm/include/asm/io.h

diff --git a/hypervisor/arch/arm/include/asm/io.h b/hypervisor/arch/arm/include/asm/io.h
new file mode 100644
index 0000000..10705f5
--- /dev/null
+++ b/hypervisor/arch/arm/include/asm/io.h
@@ -0,0 +1,63 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ * Authors:
+ * Jean-Philippe Brucker <jean-***@arm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#ifndef _JAILHOUSE_ASM_IO_H
+#define _JAILHOUSE_ASM_IO_H
+
+#include <asm/types.h>
+
+#ifndef __ASSEMBLY__
+
+static inline void writeb_relaxed(u8 val, volatile void *addr)
+{
+ *(volatile u8 *)addr = val;
+}
+
+static inline void writew_relaxed(u16 val, volatile void *addr)
+{
+ *(volatile u16 *)addr = val;
+}
+
+static inline void writel_relaxed(u32 val, volatile void *addr)
+{
+ *(volatile u32 *)addr = val;
+}
+
+static inline void writeq_relaxed(u64 val, volatile void *addr)
+{
+ /* Warning: no guarantee of atomicity */
+ *(volatile u64 *)addr = val;
+}
+
+static inline u8 readb_relaxed(volatile void *addr)
+{
+ return *(volatile u8 *)addr;
+}
+
+static inline u16 readw_relaxed(volatile void *addr)
+{
+ return *(volatile u16 *)addr;
+}
+
+static inline u32 readl_relaxed(volatile void *addr)
+{
+ return *(volatile u32 *)addr;
+}
+
+static inline u64 readq_relaxed(volatile void *addr)
+{
+ /* Warning: no guarantee of atomicity */
+ return *(volatile u64 *)addr;
+}
+
+#endif /* !__ASSEMBLY__ */
+#endif /* !_JAILHOUSE_ASM_IO_H */
--
1.7.9.5
Jan Kiszka
2014-09-03 08:10:25 UTC
Post by Jean-Philippe Brucker
Linux's arch/arm/include/io.h uses inline asm to make sure that the
compiler doesn't generate any register write-back, which are extremely
difficult to emulate.
It shouldn't be a requirement in Jailhouse, since arm doesn't currently
support nested hypervisors.
The `volatile' attribute should thus be sufficient to ensure that the
compiler doesn't optimize these away.
This patch still separates the IO accessors from the core, to facilitate
a possible replacement with inline assembly.
---
hypervisor/arch/arm/include/asm/io.h | 63 ++++++++++++++++++++++++++++++++++
1 file changed, 63 insertions(+)
create mode 100644 hypervisor/arch/arm/include/asm/io.h
diff --git a/hypervisor/arch/arm/include/asm/io.h b/hypervisor/arch/arm/include/asm/io.h
new file mode 100644
index 0000000..10705f5
--- /dev/null
+++ b/hypervisor/arch/arm/include/asm/io.h
@@ -0,0 +1,63 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#ifndef _JAILHOUSE_ASM_IO_H
+#define _JAILHOUSE_ASM_IO_H
+
+#include <asm/types.h>
+
+#ifndef __ASSEMBLY__
+
+static inline void writeb_relaxed(u8 val, volatile void *addr)
+{
+ *(volatile u8 *)addr = val;
+}
+
+static inline void writew_relaxed(u16 val, volatile void *addr)
+{
+ *(volatile u16 *)addr = val;
+}
+
+static inline void writel_relaxed(u32 val, volatile void *addr)
+{
+ *(volatile u32 *)addr = val;
+}
+
+static inline void writeq_relaxed(u64 val, volatile void *addr)
+{
+ /* Warning: no guarantee of atomicity */
+ *(volatile u64 *)addr = val;
+}
+
+static inline u8 readb_relaxed(volatile void *addr)
+{
+ return *(volatile u8 *)addr;
+}
+
+static inline u16 readw_relaxed(volatile void *addr)
+{
+ return *(volatile u16 *)addr;
+}
+
+static inline u32 readl_relaxed(volatile void *addr)
+{
+ return *(volatile u32 *)addr;
+}
+
+static inline u64 readq_relaxed(volatile void *addr)
+{
+ /* Warning: no guarantee of atomicity */
+ return *(volatile u64 *)addr;
+}
+
+#endif /* !__ASSEMBLY__ */
+#endif /* !_JAILHOUSE_ASM_IO_H */
These are identical to what we already have in jailhouse/mmio.h, no?

Jan
--
Siemens AG, Corporate Technology, CT RTC ITP SES-DE
Corporate Competence Center Embedded Linux
Jean-Philippe Brucker
2014-10-06 10:10:01 UTC
Hi Jan,
Post by Jan Kiszka
Post by Jean-Philippe Brucker
Linux's arch/arm/include/io.h uses inline asm to make sure that the
compiler doesn't generate any register write-back, which are extremely
difficult to emulate.
It shouldn't be a requirement in Jailhouse, since arm doesn't currently
support nested hypervisors.
The `volatile' attribute should thus be sufficient to ensure that the
compiler doesn't optimize these away.
This patch still separates the IO accessors from the core, to facilitate
a possible replacement with inline assembly.
---
hypervisor/arch/arm/include/asm/io.h | 63 ++++++++++++++++++++++++++++++++++
1 file changed, 63 insertions(+)
create mode 100644 hypervisor/arch/arm/include/asm/io.h
diff --git a/hypervisor/arch/arm/include/asm/io.h b/hypervisor/arch/arm/include/asm/io.h
new file mode 100644
index 0000000..10705f5
--- /dev/null
+++ b/hypervisor/arch/arm/include/asm/io.h
@@ -0,0 +1,63 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#ifndef _JAILHOUSE_ASM_IO_H
+#define _JAILHOUSE_ASM_IO_H
+
+#include <asm/types.h>
+
+#ifndef __ASSEMBLY__
+
+static inline void writeb_relaxed(u8 val, volatile void *addr)
+{
+ *(volatile u8 *)addr = val;
+}
+
+static inline void writew_relaxed(u16 val, volatile void *addr)
+{
+ *(volatile u16 *)addr = val;
+}
+
+static inline void writel_relaxed(u32 val, volatile void *addr)
+{
+ *(volatile u32 *)addr = val;
+}
+
+static inline void writeq_relaxed(u64 val, volatile void *addr)
+{
+ /* Warning: no guarantee of atomicity */
+ *(volatile u64 *)addr = val;
+}
+
+static inline u8 readb_relaxed(volatile void *addr)
+{
+ return *(volatile u8 *)addr;
+}
+
+static inline u16 readw_relaxed(volatile void *addr)
+{
+ return *(volatile u16 *)addr;
+}
+
+static inline u32 readl_relaxed(volatile void *addr)
+{
+ return *(volatile u32 *)addr;
+}
+
+static inline u64 readq_relaxed(volatile void *addr)
+{
+ /* Warning: no guarantee of atomicity */
+ return *(volatile u64 *)addr;
+}
+
+#endif /* !__ASSEMBLY__ */
+#endif /* !_JAILHOUSE_ASM_IO_H */
These are identical to what we already have in jailhouse/mmio.h, no?
Yes, but jailhouse/mmio.h wasn't exhaustive enough when I started the
port, so I took inspiration from Linux's asm/io.h, which contains a few
subtleties that I removed afterwards (cf. commit log).
The '_relaxed' suffix could help readers remember that these helpers
don't prevent any memory reordering, but it isn't useful in the initial
patches.

I think it's safe enough and much clearer to use the merged version from
your rebase, for the moment.

Cheers,
Jean-Philippe
Jean-Philippe Brucker
2014-08-08 12:02:59 UTC
Initial code for handling hypervisor traps. Only the non-banked registers
need to be saved in the low-level handler; the rest of the context won't
be overwritten.
The per-CPU data is loaded from TPIDR_EL2, and the general-purpose
registers are saved directly on the stack and supplied as a 'struct
registers' to the dispatcher.
The latter then inspects the ESR value and calls into the core accordingly.
The return value and the general-purpose registers are passed back to the
driver by retrieving 'struct registers' from the stack, before doing the
final ERET.
This patch makes it possible to handle all status-querying hypercalls.

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/Makefile | 2 +-
hypervisor/arch/arm/exception.S | 15 +++++++-
hypervisor/arch/arm/include/asm/control.h | 1 +
hypervisor/arch/arm/include/asm/processor.h | 32 +++++++++++++++++
hypervisor/arch/arm/include/asm/sysregs.h | 1 +
hypervisor/arch/arm/include/asm/traps.h | 43 ++++++++++++++++++++++
hypervisor/arch/arm/traps.c | 52 +++++++++++++++++++++++++++
7 files changed, 144 insertions(+), 2 deletions(-)
create mode 100644 hypervisor/arch/arm/include/asm/traps.h
create mode 100644 hypervisor/arch/arm/traps.c

diff --git a/hypervisor/arch/arm/Makefile b/hypervisor/arch/arm/Makefile
index 9bc393e..0016e15 100644
--- a/hypervisor/arch/arm/Makefile
+++ b/hypervisor/arch/arm/Makefile
@@ -14,7 +14,7 @@ KBUILD_AFLAGS := $(filter-out -include asm/unified.h,$(KBUILD_AFLAGS))

always := built-in.o

-obj-y := entry.o dbg-write.o exception.o setup.o lib.o
+obj-y := entry.o dbg-write.o exception.o setup.o lib.o traps.o
obj-y += paging.o mmu_hyp.o mmu_cell.o
obj-$(CONFIG_ARCH_VEXPRESS) += dbg-write-pl011.o

diff --git a/hypervisor/arch/arm/exception.S b/hypervisor/arch/arm/exception.S
index 8c483aa..075172b 100644
--- a/hypervisor/arch/arm/exception.S
+++ b/hypervisor/arch/arm/exception.S
@@ -11,6 +11,7 @@
*/

#include <asm/head.h>
+#include <asm/sysregs.h>

.text
.globl hyp_vectors
@@ -21,6 +22,18 @@ hyp_vectors:
b .
b .
b .
+ b hyp_trap
b .
b .
- b .
+
+hyp_trap:
+ /* Fill the struct registers. Should comply with NUM_USR_REGS */
+ push {r0-r12, lr}
+
+ arm_read_sysreg(TPIDR_EL2, r0)
+ mov r1, sp
+ bl arch_handle_trap
+
+ /* Restore usr regs */
+ pop {r0-r12, lr}
+ eret
diff --git a/hypervisor/arch/arm/include/asm/control.h b/hypervisor/arch/arm/include/asm/control.h
index b569cba..4903d87 100644
--- a/hypervisor/arch/arm/include/asm/control.h
+++ b/hypervisor/arch/arm/include/asm/control.h
@@ -20,6 +20,7 @@

int arch_mmu_cell_init(struct cell *cell);
int arch_mmu_cpu_cell_init(struct per_cpu *cpu_data);
+void arch_handle_trap(struct per_cpu *cpu_data, struct registers *guest_regs);

#endif /* !__ASSEMBLY__ */

diff --git a/hypervisor/arch/arm/include/asm/processor.h b/hypervisor/arch/arm/include/asm/processor.h
index 7835fc4..6dbcd07 100644
--- a/hypervisor/arch/arm/include/asm/processor.h
+++ b/hypervisor/arch/arm/include/asm/processor.h
@@ -98,9 +98,41 @@
#define PAR_ATTR_SHIFT 56
#define PAR_ATTR_MASK 0xff

+/* exception class */
+#define ESR_EC_SHIFT 26
+#define ESR_EC(hsr) ((hsr) >> ESR_EC_SHIFT & 0x3f)
+/* instruction length */
+#define ESR_IL_SHIFT 25
+#define ESR_IL(hsr) ((hsr) >> ESR_IL_SHIFT & 0x1)
+/* Instruction specific */
+#define ESR_ICC_MASK 0x1ffffff
+#define ESR_ICC(hsr) ((hsr) & ESR_ICC_MASK)
+/* Exception classes values */
+#define ESR_EC_UNK 0x00
+#define ESR_EC_WFI 0x01
+#define ESR_EC_CP15_32 0x03
+#define ESR_EC_CP15_64 0x04
+#define ESR_EC_CP14_32 0x05
+#define ESR_EC_CP14_LC 0x06
+#define ESR_EC_HCPTR 0x07
+#define ESR_EC_CP10 0x08
+#define ESR_EC_CP14_64 0x0c
+#define ESR_EC_SVC_HYP 0x11
+#define ESR_EC_HVC 0x12
+#define ESR_EC_SMC 0x13
+#define ESR_EC_IABT 0x20
+#define ESR_EC_IABT_HYP 0x21
+#define ESR_EC_PCALIGN 0x22
+#define ESR_EC_DABT 0x24
+#define ESR_EC_DABT_HYP 0x25
+
+#define NUM_USR_REGS 14
+
#ifndef __ASSEMBLY__

struct registers {
+ /* r0 - r12 and lr. The other registers are banked. */
+ unsigned long usr[NUM_USR_REGS];
};

#define dmb(domain) asm volatile("dmb " #domain "\n" ::: "memory")
diff --git a/hypervisor/arch/arm/include/asm/sysregs.h b/hypervisor/arch/arm/include/asm/sysregs.h
index 1679734..b2aaf06 100644
--- a/hypervisor/arch/arm/include/asm/sysregs.h
+++ b/hypervisor/arch/arm/include/asm/sysregs.h
@@ -33,6 +33,7 @@
#define ID_PFR0_EL1 SYSREG_32(0, c0, c1, 0)
#define ID_PFR1_EL1 SYSREG_32(0, c0, c1, 1)
#define SCTLR_EL2 SYSREG_32(4, c1, c0, 0)
+#define ESR_EL2 SYSREG_32(4, c5, c2, 0)
#define TPIDR_EL2 SYSREG_32(4, c13, c0, 2)
#define TTBR0_EL2 SYSREG_64(4, c2)
#define TCR_EL2 SYSREG_32(4, c2, c0, 2)
diff --git a/hypervisor/arch/arm/include/asm/traps.h b/hypervisor/arch/arm/include/asm/traps.h
new file mode 100644
index 0000000..9bab7e9
--- /dev/null
+++ b/hypervisor/arch/arm/include/asm/traps.h
@@ -0,0 +1,43 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ * Authors:
+ * Jean-Philippe Brucker <jean-***@arm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#ifndef _JAILHOUSE_ASM_TRAPS_H
+#define _JAILHOUSE_ASM_TRAPS_H
+
+#include <asm/head.h>
+#include <asm/percpu.h>
+#include <asm/types.h>
+
+#ifndef __ASSEMBLY__
+
+enum trap_return {
+ TRAP_HANDLED = 1,
+ TRAP_UNHANDLED = 0,
+};
+
+struct trap_context {
+ unsigned long *regs;
+ u32 esr;
+ u32 cpsr;
+};
+
+typedef int (*trap_handler)(struct per_cpu *cpu_data,
+ struct trap_context *ctx);
+
+#define arm_read_banked_reg(reg, val) \
+ asm volatile ("mrs %0, " #reg "\n" : "=r" (val))
+
+#define arm_write_banked_reg(reg, val) \
+ asm volatile ("msr " #reg ", %0\n" : : "r" (val))
+
+#endif /* !__ASSEMBLY__ */
+#endif /* !_JAILHOUSE_ASM_TRAPS_H */
diff --git a/hypervisor/arch/arm/traps.c b/hypervisor/arch/arm/traps.c
new file mode 100644
index 0000000..95628c6
--- /dev/null
+++ b/hypervisor/arch/arm/traps.c
@@ -0,0 +1,52 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ * Authors:
+ * Jean-Philippe Brucker <jean-***@arm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <asm/control.h>
+#include <asm/traps.h>
+#include <asm/sysregs.h>
+#include <jailhouse/printk.h>
+#include <jailhouse/control.h>
+
+static int arch_handle_hvc(struct per_cpu *cpu_data, struct trap_context *ctx)
+{
+ unsigned long *regs = ctx->regs;
+
+ regs[0] = hypercall(cpu_data, regs[0], regs[1], regs[2]);
+
+ return TRAP_HANDLED;
+}
+
+static const trap_handler trap_handlers[38] =
+{
+ [ESR_EC_HVC] = arch_handle_hvc,
+};
+
+void arch_handle_trap(struct per_cpu *cpu_data, struct registers *guest_regs)
+{
+ struct trap_context ctx;
+ u32 exception_class;
+ int ret = TRAP_UNHANDLED;
+
+ arm_read_banked_reg(SPSR_hyp, ctx.cpsr);
+ arm_read_sysreg(ESR_EL2, ctx.esr);
+ exception_class = ESR_EC(ctx.esr);
+ ctx.regs = guest_regs->usr;
+
+ if (trap_handlers[exception_class])
+ ret = trap_handlers[exception_class](cpu_data, &ctx);
+
+ if (ret != TRAP_HANDLED) {
+ panic_printk("CPU%d: Unhandled HYP trap, syndrome 0x%x\n",
+ cpu_data->cpu_id, ctx.esr);
+ while(1);
+ }
+}
--
1.7.9.5
Jean-Philippe Brucker
2014-08-08 12:02:54 UTC
This patch adds the ability to implement the EL2 stage-1 and EL1 stage-2
MMUs, using the struct paging defined by the core.
In the future, it may be useful to separate the stage-1 and stage-2
functions, as in the x86 implementation, in order to use fewer translation
levels for the IPA->PA translation. For the moment, we keep the
hv_paging = arm_paging definition.
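The three-level long-descriptor walk configured here (T0SZ = 0, starting at level 1) splits a 32-bit address into an L1 index from VA[31:30], an L2 index from VA[29:21] and an L3 index from VA[20:12]. A minimal sketch of that index extraction, matching the L1/L2/L3_VADDR_MASK shifts in the patch:

```c
#include <stdint.h>

/* Index extraction for the 3-level long-descriptor walk:
 * 4 L1 entries of 1GB, 512 L2 entries of 2MB, 512 L3 entries of 4kB. */
static unsigned l1_index(uint32_t va) { return (va >> 30) & 0x3; }
static unsigned l2_index(uint32_t va) { return (va >> 21) & 0x1ff; }
static unsigned l3_index(uint32_t va) { return (va >> 12) & 0x1ff; }
```

So VA 0x80201000 resolves through L1 entry 2, L2 entry 1, L3 entry 1; the alternative 2-level layout mentioned in the comment would instead index 4 concatenated level-2 tables directly by IPA[31:21].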

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/Makefile | 2 +-
hypervisor/arch/arm/include/asm/paging.h | 149 ++++++++++++++++++++++--
hypervisor/arch/arm/include/asm/paging_modes.h | 5 +
hypervisor/arch/arm/paging.c | 148 +++++++++++++++++++++++
hypervisor/arch/arm/setup.c | 4 -
5 files changed, 295 insertions(+), 13 deletions(-)
create mode 100644 hypervisor/arch/arm/paging.c

diff --git a/hypervisor/arch/arm/Makefile b/hypervisor/arch/arm/Makefile
index 41e3394..b8cc50b 100644
--- a/hypervisor/arch/arm/Makefile
+++ b/hypervisor/arch/arm/Makefile
@@ -15,7 +15,7 @@ KBUILD_AFLAGS := $(filter-out -include asm/unified.h,$(KBUILD_AFLAGS))
always := built-in.o

obj-y := entry.o dbg-write.o exception.o setup.o lib.o
-obj-y += mmu_hyp.o
+obj-y += paging.o mmu_hyp.o
obj-$(CONFIG_ARCH_VEXPRESS) += dbg-write-pl011.o

# Needed for kconfig
diff --git a/hypervisor/arch/arm/include/asm/paging.h b/hypervisor/arch/arm/include/asm/paging.h
index 5b48790..251576e 100644
--- a/hypervisor/arch/arm/include/asm/paging.h
+++ b/hypervisor/arch/arm/include/asm/paging.h
@@ -13,21 +13,154 @@
#ifndef _JAILHOUSE_ASM_PAGING_H
#define _JAILHOUSE_ASM_PAGING_H

-#include <asm/types.h>
#include <asm/processor.h>
+#include <asm/types.h>
+#include <jailhouse/utils.h>

#define PAGE_SIZE 4096
#define PAGE_MASK ~(PAGE_SIZE - 1)
#define PAGE_OFFS_MASK (PAGE_SIZE - 1)

-#define MAX_PAGE_DIR_LEVELS 4
+#define MAX_PAGE_DIR_LEVELS 3
+
+/*
+ * When T0SZ == 0 and SL0 == 0, the EL2 MMU starts the IPA->PA translation at
+ * the level 2 table. The second table is indexed by IPA[31:21], the third one
+ * by IPA[20:12].
+ * This would allow a 4GB memory map to be covered by using 4 concatenated
+ * level-2 page tables and thus provide better table walk performance.
+ * For the moment, the core doesn't allow concatenated tables to be used, so we
+ * use three levels instead, starting at level 1.
+ *
+ * TODO: add a "u32 concatenated" field to the paging struct
+ */
+#if MAX_PAGE_DIR_LEVELS < 3
+#define T0SZ 0
+#define SL0 0
+#define PADDR_OFF (14 - T0SZ)
+#define L2_VADDR_MASK BIT_MASK(21, 17 + PADDR_OFF)
+#else
+#define T0SZ 0
+#define SL0 1
+#define PADDR_OFF (5 - T0SZ)
+#define L1_VADDR_MASK BIT_MASK(26 + PADDR_OFF, 30)
+#define L2_VADDR_MASK BIT_MASK(29, 21)
+#endif
+
+#define L3_VADDR_MASK BIT_MASK(20, 12)
+
+/*
+ * Stage-1 and Stage-2 lower attributes.
+ * FIXME: The upper attributes (contiguous hint and XN) are not currently in
+ * use. If needed in the future, they should be shifted towards the lower word,
+ * since the core uses unsigned long to pass the flags.
+ * An arch-specific typedef for the flags as well as the addresses would be
+ * useful.
+ * The contiguous bit is a hint that allows the PE to store blocks of 16 pages
+ * in the TLB. This may be a useful optimisation.
+ */
+#define PTE_ACCESS_FLAG (0x1 << 10)
+/*
+ * When combining shareability attributes, the stage-1 ones prevail. So we can
+ * safely leave everything non-shareable at stage 2.
+ */
+#define PTE_NON_SHAREABLE (0x0 << 8)
+#define PTE_OUTER_SHAREABLE (0x2 << 8)
+#define PTE_INNER_SHAREABLE (0x3 << 8)
+
+#define PTE_MEMATTR(val) ((val) << 2)
+#define PTE_FLAG_TERMINAL (0x1 << 1)
+#define PTE_FLAG_VALID (0x1 << 0)
+
+/* These bits differ in stage 1 and 2 translations */
+#define S1_PTE_NG (0x1 << 11)
+#define S1_PTE_ACCESS_RW (0x0 << 7)
+#define S1_PTE_ACCESS_RO (0x1 << 7)
+/* Res1 for EL2 stage-1 tables */
+#define S1_PTE_ACCESS_EL0 (0x1 << 6)
+
+#define S2_PTE_ACCESS_RO (0x1 << 6)
+#define S2_PTE_ACCESS_WO (0x2 << 6)
+#define S2_PTE_ACCESS_RW (0x3 << 6)
+
+/*
+ * Descriptor pointing to a page table
+ * (only for L1 and L2. L3 uses this encoding for terminal entries...)
+ */
+#define PTE_TABLE_FLAGS 0x3
+
+#define PTE_L1_BLOCK_ADDR_MASK BIT_MASK(39, 30)
+#define PTE_L2_BLOCK_ADDR_MASK BIT_MASK(39, 21)
+#define PTE_TABLE_ADDR_MASK BIT_MASK(39, 12)
+#define PTE_PAGE_ADDR_MASK BIT_MASK(39, 12)
+
+#define BLOCK_1G_VADDR_MASK BIT_MASK(29, 0)
+#define BLOCK_2M_VADDR_MASK BIT_MASK(20, 0)
+
+#define TTBR_MASK BIT_MASK(47, PADDR_OFF)
+
+#define HTCR_RES1 ((1 << 31) | (1 << 23))
+#define VTCR_RES1 ((1 << 31))
+#define TCR_RGN_NON_CACHEABLE 0x0
+#define TCR_RGN_WB_WA 0x1
+#define TCR_RGN_WT 0x2
+#define TCR_RGN_WB 0x3
+#define TCR_NON_SHAREABLE 0x0
+#define TCR_OUTER_SHAREABLE 0x2
+#define TCR_INNER_SHAREABLE 0x3
+
+#define TCR_SH0_SHIFT 12
+#define TCR_ORGN0_SHIFT 10
+#define TCR_IRGN0_SHIFT 8
+#define TCR_SL0_SHIFT 6
+#define TCR_S_SHIFT 4
+
+/*
+ * Memory attribute indexes:
+ * 0: normal WB, RA, WA, non-transient
+ * 1: dev-nGnRE
+ * 2: normal non-cacheable
+ * 3: normal WT, RA, transient
+ * 4: normal WB, WA, non-transient
+ * 5: normal WB, RA, non-transient
+ * 6: dev-nGnRnE
+ * 7: dev-nGnRnE (unused)
+ */
+#define MEMATTR_WBRAWA 0xff
+#define MEMATTR_DEV_nGnRE 0x04
+#define MEMATTR_NC 0x44
+#define MEMATTR_WTRA 0xaa
+#define MEMATTR_WBWA 0x55
+#define MEMATTR_WBRA 0xee
+#define MEMATTR_DEV_nGnRnE 0x00
+
+#define DEFAULT_HMAIR0 0xaa4404ff
+#define DEFAULT_HMAIR1 0x0000ee55
+#define HMAIR_IDX_WBRAWA 0
+#define HMAIR_IDX_DEV_nGnRE 1
+#define HMAIR_IDX_NC 2
+#define HMAIR_IDX_WTRA 3
+#define HMAIR_IDX_WBWA 4
+#define HMAIR_IDX_WBRA 5
+#define HMAIR_IDX_DEV_nGnRnE 6
+
+
+#define S1_PTE_FLAG_NORMAL PTE_MEMATTR(HMAIR_IDX_WBRAWA)
+#define S1_PTE_FLAG_DEVICE PTE_MEMATTR(HMAIR_IDX_DEV_nGnRE)
+#define S1_PTE_FLAG_UNCACHED PTE_MEMATTR(HMAIR_IDX_NC)
+
+#define S2_PTE_FLAG_NORMAL PTE_MEMATTR(MEMATTR_WBRAWA)
+#define S2_PTE_FLAG_DEVICE PTE_MEMATTR(MEMATTR_DEV_nGnRE)
+#define S2_PTE_FLAG_NC PTE_MEMATTR(MEMATTR_NC)

-#define PAGE_FLAG_PRESENT 0x01
-#define PAGE_FLAG_RW 0x02
-#define PAGE_FLAG_UNCACHED 0x10
+#define S1_DEFAULT_FLAGS (PTE_FLAG_VALID | PTE_ACCESS_FLAG \
+ | S1_PTE_FLAG_NORMAL | PTE_INNER_SHAREABLE\
+ | S1_PTE_ACCESS_EL0)

-#define PAGE_DEFAULT_FLAGS (PAGE_FLAG_PRESENT | PAGE_FLAG_RW)
-#define PAGE_READONLY_FLAGS PAGE_FLAG_PRESENT
+/* Macros used by the core, only for the EL2 stage-1 mappings */
+#define PAGE_FLAG_UNCACHED S1_PTE_FLAG_NC
+#define PAGE_DEFAULT_FLAGS (S1_DEFAULT_FLAGS | S1_PTE_ACCESS_RW)
+#define PAGE_READONLY_FLAGS (S1_DEFAULT_FLAGS | S1_PTE_ACCESS_RO)
#define PAGE_NONPRESENT_FLAGS 0

#define INVALID_PHYS_ADDR (~0UL)
@@ -39,7 +172,7 @@

#ifndef __ASSEMBLY__

-typedef unsigned long *pt_entry_t;
+typedef u64 *pt_entry_t;

static inline void arch_tlb_flush_page(unsigned long addr)
{
diff --git a/hypervisor/arch/arm/include/asm/paging_modes.h b/hypervisor/arch/arm/include/asm/paging_modes.h
index 932fb6e..72950eb 100644
--- a/hypervisor/arch/arm/include/asm/paging_modes.h
+++ b/hypervisor/arch/arm/include/asm/paging_modes.h
@@ -10,8 +10,13 @@
* the COPYING file in the top-level directory.
*/

+#ifndef __ASSEMBLY__
+
#include <jailhouse/paging.h>

+/* Long-descriptor paging */
extern const struct paging arm_paging[];

#define hv_paging arm_paging
+
+#endif /* !__ASSEMBLY__ */
diff --git a/hypervisor/arch/arm/paging.c b/hypervisor/arch/arm/paging.c
new file mode 100644
index 0000000..b4ac06f
--- /dev/null
+++ b/hypervisor/arch/arm/paging.c
@@ -0,0 +1,148 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ * Authors:
+ * Jean-Philippe Brucker <jean-***@arm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <jailhouse/paging.h>
+
+static bool arm_entry_valid(pt_entry_t entry)
+{
+ return *entry & 1;
+}
+
+static unsigned long arm_get_entry_flags(pt_entry_t entry)
+{
+ /* Upper flags (contiguous hint and XN) are currently ignored */
+ return *entry & 0xfff;
+}
+
+static void arm_clear_entry(pt_entry_t entry)
+{
+ *entry = 0;
+}
+
+static bool arm_page_table_empty(page_table_t page_table)
+{
+ unsigned long n;
+ pt_entry_t pte;
+
+ for (n = 0, pte = page_table; n < PAGE_SIZE / sizeof(pt_entry_t); n++, pte++)
+ if (arm_entry_valid(pte))
+ return false;
+ return true;
+}
+
+#if MAX_PAGE_DIR_LEVELS > 2
+static pt_entry_t arm_get_l1_entry(page_table_t page_table, unsigned long virt)
+{
+ return &page_table[(virt & L1_VADDR_MASK) >> 30];
+}
+
+static void arm_set_l1_block(pt_entry_t pte, unsigned long phys, unsigned long flags)
+{
+ *pte = ((u64)phys & PTE_L1_BLOCK_ADDR_MASK) | flags;
+}
+
+static unsigned long arm_get_l1_phys(pt_entry_t pte, unsigned long virt)
+{
+ if ((*pte & PTE_TABLE_FLAGS) == PTE_TABLE_FLAGS)
+ return INVALID_PHYS_ADDR;
+ return (*pte & PTE_L1_BLOCK_ADDR_MASK) | (virt & BLOCK_1G_VADDR_MASK);
+}
+#endif
+
+static pt_entry_t arm_get_l2_entry(page_table_t page_table, unsigned long virt)
+{
+ return &page_table[(virt & L2_VADDR_MASK) >> 21];
+}
+
+static pt_entry_t arm_get_l3_entry(page_table_t page_table, unsigned long virt)
+{
+ return &page_table[(virt & L3_VADDR_MASK) >> 12];
+}
+
+static void arm_set_l2_block(pt_entry_t pte, unsigned long phys, unsigned long flags)
+{
+ *pte = ((u64)phys & PTE_L2_BLOCK_ADDR_MASK) | flags;
+}
+
+static void arm_set_l3_page(pt_entry_t pte, unsigned long phys, unsigned long flags)
+{
+ *pte = ((u64)phys & PTE_PAGE_ADDR_MASK) | flags | PTE_FLAG_TERMINAL;
+}
+
+static void arm_set_l12_table(pt_entry_t pte, unsigned long next_pt)
+{
+ *pte = ((u64)next_pt & PTE_TABLE_ADDR_MASK) | PTE_TABLE_FLAGS;
+}
+
+static unsigned long arm_get_l12_table(pt_entry_t pte)
+{
+ return *pte & PTE_TABLE_ADDR_MASK;
+}
+
+static unsigned long arm_get_l2_phys(pt_entry_t pte, unsigned long virt)
+{
+ if ((*pte & PTE_TABLE_FLAGS) == PTE_TABLE_FLAGS)
+ return INVALID_PHYS_ADDR;
+ return (*pte & PTE_L2_BLOCK_ADDR_MASK) | (virt & BLOCK_2M_VADDR_MASK);
+}
+
+static unsigned long arm_get_l3_phys(pt_entry_t pte, unsigned long virt)
+{
+ if (!(*pte & PTE_FLAG_TERMINAL))
+ return INVALID_PHYS_ADDR;
+ return (*pte & PTE_PAGE_ADDR_MASK) | (virt & PAGE_MASK);
+}
+
+#define ARM_PAGING_COMMON \
+ .entry_valid = arm_entry_valid, \
+ .get_flags = arm_get_entry_flags, \
+ .clear_entry = arm_clear_entry, \
+ .page_table_empty = arm_page_table_empty,
+
+const struct paging arm_paging[] = {
+#if MAX_PAGE_DIR_LEVELS > 2
+ {
+ ARM_PAGING_COMMON
+ /* Block entry: 1GB */
+ .page_size = 1024 * 1024 * 1024,
+ .get_entry = arm_get_l1_entry,
+ .set_terminal = arm_set_l1_block,
+ .get_phys = arm_get_l1_phys,
+
+ .set_next_pt = arm_set_l12_table,
+ .get_next_pt = arm_get_l12_table,
+ },
+#endif
+ {
+ ARM_PAGING_COMMON
+ /* Block entry: 2MB */
+ .page_size = 2 * 1024 * 1024,
+ .get_entry = arm_get_l2_entry,
+ .set_terminal = arm_set_l2_block,
+ .get_phys = arm_get_l2_phys,
+
+ .set_next_pt = arm_set_l12_table,
+ .get_next_pt = arm_get_l12_table,
+ },
+ {
+ ARM_PAGING_COMMON
+ /* Page entry: 4kB */
+ .page_size = 4 * 1024,
+ .get_entry = arm_get_l3_entry,
+ .set_terminal = arm_set_l3_page,
+ .get_phys = arm_get_l3_phys,
+ }
+};
+
+void arch_paging_init(void)
+{
+}
diff --git a/hypervisor/arch/arm/setup.c b/hypervisor/arch/arm/setup.c
index 599ad39..43ef1eb 100644
--- a/hypervisor/arch/arm/setup.c
+++ b/hypervisor/arch/arm/setup.c
@@ -86,9 +86,5 @@ void arch_shutdown(void) {}
unsigned long arch_page_map_gphys2phys(struct per_cpu *cpu_data,
unsigned long gphys)
{ return INVALID_PHYS_ADDR; }
-void arch_paging_init(void) { }
-
-const struct paging arm_paging[1];
-
void arch_panic_stop(struct per_cpu *cpu_data) {__builtin_unreachable();}
void arch_panic_halt(struct per_cpu *cpu_data) {}
--
1.7.9.5
Jean-Philippe Brucker
2014-08-08 12:03:09 UTC
This is not the final implementation: it can only be used internally for
suspending, resuming and parking CPUs while reconfiguring the cells.
Using this base, a trivial PSCI 0.2 emulation can be added by implementing
the appropriate trap hooks.

The mailbox in the per_cpu structure is used to store the address and
context where psci_cpu_off returns.
CPUs performing reconfiguration of the cells can use the psci_suspend and
psci_resume wrappers to stop and restart all affected cores.
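The mailbox protocol is the core of this patch: a parked CPU marks its mailbox entry as invalid and waits; a CPU_ON only succeeds against an invalid entry. A single-threaded C sketch of that handshake (the real `_psci_cpu_on` uses ldrexd/strexd for atomicity; `mbox_cpu_on` and `demo_parked_then_on` are illustrative names):

```c
#define PSCI_INVALID_ADDRESS	0xffffffff
#define PSCI_SUCCESS		0
#define PSCI_ALREADY_ON		(-4)

struct psci_mbox {
	unsigned long entry;
	unsigned long context;
};

/* Only a parked CPU (entry == PSCI_INVALID_ADDRESS) may be given a new
 * entry point; a second CPU_ON against a running CPU is rejected. */
static long mbox_cpu_on(struct psci_mbox *mbox, unsigned long entry,
			unsigned long context)
{
	if (mbox->entry != PSCI_INVALID_ADDRESS)
		return PSCI_ALREADY_ON;
	mbox->entry = entry;
	mbox->context = context;
	return PSCI_SUCCESS;
}

/* Scenario: park, power on once (succeeds), power on again (denied). */
static int demo_parked_then_on(void)
{
	struct psci_mbox m = { PSCI_INVALID_ADDRESS, 0 };

	if (mbox_cpu_on(&m, 0x80008000, 42) != PSCI_SUCCESS)
		return 0;
	if (mbox_cpu_on(&m, 0x80010000, 0) != PSCI_ALREADY_ON)
		return 0;
	return m.entry == 0x80008000 && m.context == 42;
}
```

The `sev`/`wfe` pair in psci_low.S then wakes the parked CPU, which jumps to `entry` with `context` in r0.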

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/Makefile | 1 +
hypervisor/arch/arm/include/asm/percpu.h | 4 ++
hypervisor/arch/arm/include/asm/psci.h | 64 ++++++++++++++++++++++++++
hypervisor/arch/arm/psci.c | 74 ++++++++++++++++++++++++++++++
hypervisor/arch/arm/psci_low.S | 71 ++++++++++++++++++++++++++++
hypervisor/arch/arm/setup.c | 2 +
6 files changed, 216 insertions(+)
create mode 100644 hypervisor/arch/arm/include/asm/psci.h
create mode 100644 hypervisor/arch/arm/psci.c
create mode 100644 hypervisor/arch/arm/psci_low.S

diff --git a/hypervisor/arch/arm/Makefile b/hypervisor/arch/arm/Makefile
index 641b55d..6ad6b47 100644
--- a/hypervisor/arch/arm/Makefile
+++ b/hypervisor/arch/arm/Makefile
@@ -16,6 +16,7 @@ always := built-in.o

obj-y := entry.o dbg-write.o exception.o setup.o control.o lib.o traps.o
obj-y += paging.o mmu_hyp.o mmu_cell.o
+obj-y += psci.o psci_low.o
obj-y += irqchip.o
obj-$(CONFIG_ARM_GIC_V3) += gic-v3.o
obj-$(CONFIG_ARCH_VEXPRESS) += dbg-write-pl011.o
diff --git a/hypervisor/arch/arm/include/asm/percpu.h b/hypervisor/arch/arm/include/asm/percpu.h
index 16750c0..53bf97f 100644
--- a/hypervisor/arch/arm/include/asm/percpu.h
+++ b/hypervisor/arch/arm/include/asm/percpu.h
@@ -26,6 +26,7 @@
#ifndef __ASSEMBLY__

#include <asm/cell.h>
+#include <asm/psci.h>
#include <asm/spinlock.h>

struct pending_irq;
@@ -53,6 +54,9 @@ struct per_cpu {

bool initialized;

+ /* The mbox will be accessed with a ldrd, which requires alignment */
+ __attribute__((aligned(8))) struct psci_mbox psci_mbox;
+
volatile bool stop_cpu;
volatile bool wait_for_sipi;
volatile bool cpu_stopped;
diff --git a/hypervisor/arch/arm/include/asm/psci.h b/hypervisor/arch/arm/include/asm/psci.h
new file mode 100644
index 0000000..1883a6d
--- /dev/null
+++ b/hypervisor/arch/arm/include/asm/psci.h
@@ -0,0 +1,64 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ * Authors:
+ * Jean-Philippe Brucker <jean-***@arm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#ifndef _JAILHOUSE_ASM_PSCI_H
+#define _JAILHOUSE_ASM_PSCI_H
+
+#define PSCI_VERSION 0x84000000
+#define PSCI_CPU_SUSPEND_32 0x84000001
+#define PSCI_CPU_SUSPEND_64 0xc4000001
+#define PSCI_CPU_OFF 0x84000002
+#define PSCI_CPU_ON_32 0x84000003
+#define PSCI_CPU_ON_64 0xc4000003
+#define PSCI_AFFINITY_INFO_32 0x84000004
+#define PSCI_AFFINITY_INFO_64 0xc4000004
+#define PSCI_MIGRATE_32 0x84000005
+#define PSCI_MIGRATE_64 0xc4000005
+#define PSCI_MIGRATE_INFO_TYPE 0x84000006
+#define PSCI_MIGRATE_INFO_UP_CPU_32 0x84000007
+#define PSCI_MIGRATE_INFO_UP_CPU_64 0xc4000007
+#define PSCI_SYSTEM_OFF 0x84000008
+#define PSCI_SYSTEM_RESET 0x84000009
+
+#define PSCI_SUCCESS 0
+#define PSCI_NOT_SUPPORTED (-1)
+#define PSCI_INVALID_PARAMETERS (-2)
+#define PSCI_DENIED (-3)
+#define PSCI_ALREADY_ON (-4)
+#define PSCI_ON_PENDING (-5)
+#define PSCI_INTERNAL_FAILURE (-6)
+#define PSCI_NOT_PRESENT (-7)
+#define PSCI_DISABLED (-8)
+
+#define PSCI_INVALID_ADDRESS 0xffffffff
+
+#ifndef __ASSEMBLY__
+
+struct trap_context;
+struct per_cpu;
+struct psci_mbox {
+ unsigned long entry;
+ unsigned long context;
+};
+
+void psci_cpu_off(struct per_cpu *cpu_data);
+long psci_cpu_on(unsigned int target, unsigned long entry,
+ unsigned long context);
+bool psci_cpu_stopped(unsigned int cpu_id);
+int psci_wait_cpu_stopped(unsigned int cpu_id);
+
+void psci_suspend(struct per_cpu *cpu_data);
+long psci_resume(unsigned int target);
+long psci_try_resume(unsigned int cpu_id);
+
+#endif /* !__ASSEMBLY__ */
+#endif /* _JAILHOUSE_ASM_PSCI_H */
diff --git a/hypervisor/arch/arm/psci.c b/hypervisor/arch/arm/psci.c
new file mode 100644
index 0000000..132d6a0
--- /dev/null
+++ b/hypervisor/arch/arm/psci.c
@@ -0,0 +1,74 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ * Authors:
+ * Jean-Philippe Brucker <jean-***@arm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <asm/control.h>
+#include <asm/psci.h>
+#include <asm/traps.h>
+
+void _psci_cpu_off(struct psci_mbox *);
+long _psci_cpu_on(struct psci_mbox *, unsigned long, unsigned long);
+void _psci_suspend(struct psci_mbox *, unsigned long *address);
+void _psci_suspend_return(void);
+
+void psci_cpu_off(struct per_cpu *cpu_data)
+{
+ _psci_cpu_off(&cpu_data->psci_mbox);
+}
+
+long psci_cpu_on(unsigned int target, unsigned long entry,
+ unsigned long context)
+{
+ struct per_cpu *cpu_data = per_cpu(target);
+ struct psci_mbox *mbox = &cpu_data->psci_mbox;
+
+ return _psci_cpu_on(mbox, entry, context);
+}
+
+/*
+ * Not a real psci_cpu_suspend implementation. Only used to semantically
+ * differentiate from `cpu_off'. Return is done via psci_resume.
+ */
+void psci_suspend(struct per_cpu *cpu_data)
+{
+ psci_cpu_off(cpu_data);
+}
+
+long psci_resume(unsigned int target)
+{
+ psci_wait_cpu_stopped(target);
+ return psci_cpu_on(target, (unsigned long)&_psci_suspend_return, 0);
+}
+
+bool psci_cpu_stopped(unsigned int cpu_id)
+{
+ return per_cpu(cpu_id)->psci_mbox.entry == PSCI_INVALID_ADDRESS;
+}
+
+long psci_try_resume(unsigned int cpu_id)
+{
+ if (psci_cpu_stopped(cpu_id))
+ return psci_resume(cpu_id);
+
+ return -EBUSY;
+}
+
+int psci_wait_cpu_stopped(unsigned int cpu_id)
+{
+ /* FIXME: add a delay */
+ do {
+ if (psci_cpu_stopped(cpu_id))
+ return 0;
+ cpu_relax();
+ } while (1);
+
+ return -EBUSY;
+}
diff --git a/hypervisor/arch/arm/psci_low.S b/hypervisor/arch/arm/psci_low.S
new file mode 100644
index 0000000..76eeaba
--- /dev/null
+++ b/hypervisor/arch/arm/psci_low.S
@@ -0,0 +1,71 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ * Authors:
+ * Jean-Philippe Brucker <jean-***@arm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <asm/head.h>
+#include <asm/psci.h>
+
+ .global _psci_cpu_off
+ /* r0: struct psci_mbox* */
+_psci_cpu_off:
+ ldr r2, =PSCI_INVALID_ADDRESS
+ /* Clear mbox */
+ str r2, [r0]
+ /*
+ * No reordering against the ldr below for the PEs in our domain, so no
+ * need for a barrier. Other CPUs will wait for an invalid address
+ * before issuing a CPU_ON.
+ */
+
+ /* Wait for a CPU_ON call that updates the mbox */
+1: wfe
+ ldr r1, [r0]
+ cmp r1, r2
+ beq 1b
+
+ /* Jump to the requested entry, with a parameter */
+ ldr r0, [r0, #4]
+ bx r1
+
+ .global _psci_cpu_on
+ /* r0: struct psci_mbox*, r1: entry, r2: context */
+_psci_cpu_on:
+ push {r4, r5, lr}
+ /* strd needs to start with an even register */
+ mov r3, r2
+ mov r2, r1
+ ldr r1, =PSCI_INVALID_ADDRESS
+
+ ldrexd r4, r5, [r0]
+ cmp r4, r1
+ bne store_failed
+ strexd r1, r2, r3, [r0]
+ /* r1 contains the ex store flag */
+ cmp r1, #0
+ bne store_failed
+
+ /*
+ * Ensure that the stopped CPU can read the new address when receiving
+ * the event.
+ */
+ dsb ish
+ sev
+ mov r0, #0
+ pop {r4, r5, pc}
+
+store_failed:
+ clrex
+ mov r0, #PSCI_ALREADY_ON
+ pop {r4, r5, pc}
+
+ .global _psci_suspend_return
+_psci_suspend_return:
+ bx lr
diff --git a/hypervisor/arch/arm/setup.c b/hypervisor/arch/arm/setup.c
index d293c2c..e7a0845 100644
--- a/hypervisor/arch/arm/setup.c
+++ b/hypervisor/arch/arm/setup.c
@@ -53,6 +53,8 @@ int arch_cpu_init(struct per_cpu *cpu_data)
int err = 0;
unsigned long hcr = HCR_VM_BIT | HCR_IMO_BIT | HCR_FMO_BIT;

+ cpu_data->psci_mbox.entry = 0;
+
/*
* Copy the registers to restore from the linux stack here, because we
* won't be able to access it later
--
1.7.9.5
Jean-Philippe Brucker
2014-08-08 12:02:47 UTC
memcpy and phys_processor_id implementations are required before going
any further. This patch introduces very trivial versions of those
functions.
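Both helpers are small enough to exercise standalone. A sketch (the copy is renamed `memcpy_bytewise` here only to avoid colliding with the libc symbol; in the hypervisor it replaces memcpy itself):

```c
/* CPU id is the affinity value in MPIDR[23:0], as in phys_processor_id. */
#define MPIDR_CPUID_MASK	0x00ffffff

static unsigned int mpidr_to_cpuid(unsigned int mpidr)
{
	return mpidr & MPIDR_CPUID_MASK;
}

/* Trivial byte-wise copy, equivalent to the patch's memcpy. */
static void *memcpy_bytewise(void *dest, const void *src, unsigned long n)
{
	unsigned long i;
	const char *csrc = src;
	char *cdest = dest;

	for (i = 0; i < n; i++)
		cdest[i] = csrc[i];

	return dest;
}

static int demo_copy(void)
{
	const char src[] = "jailhouse";
	char dst[sizeof(src)];

	memcpy_bytewise(dst, src, sizeof(src));
	return dst[0] == 'j' && dst[8] == 'e' && dst[9] == '\0';
}
```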

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/Makefile | 2 +-
hypervisor/arch/arm/include/asm/processor.h | 2 ++
hypervisor/arch/arm/include/asm/sysregs.h | 5 ++++
hypervisor/arch/arm/lib.c | 36 +++++++++++++++++++++++++++
hypervisor/arch/arm/setup.c | 2 --
5 files changed, 44 insertions(+), 3 deletions(-)
create mode 100644 hypervisor/arch/arm/lib.c

diff --git a/hypervisor/arch/arm/Makefile b/hypervisor/arch/arm/Makefile
index 6ceb061..425f221 100644
--- a/hypervisor/arch/arm/Makefile
+++ b/hypervisor/arch/arm/Makefile
@@ -14,4 +14,4 @@ KBUILD_AFLAGS := $(filter-out -include asm/unified.h,$(KBUILD_AFLAGS))

always := built-in.o

-obj-y := entry.o setup.o
+obj-y := entry.o setup.o lib.o
diff --git a/hypervisor/arch/arm/include/asm/processor.h b/hypervisor/arch/arm/include/asm/processor.h
index d42be81..ef76687 100644
--- a/hypervisor/arch/arm/include/asm/processor.h
+++ b/hypervisor/arch/arm/include/asm/processor.h
@@ -13,6 +13,8 @@
#ifndef _JAILHOUSE_ASM_PROCESSOR_H
#define _JAILHOUSE_ASM_PROCESSOR_H

+#define MPIDR_CPUID_MASK 0x00ffffff
+
#ifndef __ASSEMBLY__

struct registers {
diff --git a/hypervisor/arch/arm/include/asm/sysregs.h b/hypervisor/arch/arm/include/asm/sysregs.h
index 8be5ce1..b5dddcf 100644
--- a/hypervisor/arch/arm/include/asm/sysregs.h
+++ b/hypervisor/arch/arm/include/asm/sysregs.h
@@ -25,6 +25,11 @@
* - arm_read_sysreg(SYSREG_NAME, reg)
*/

+/*
+ * 32bit sysregs definitions
+ * (Use the AArch64 names to ease the compatibility work)
+ */
+#define MPIDR_EL1 SYSREG_32(0, c0, c0, 5)

#define SYSREG_32(...) 32, __VA_ARGS__
#define SYSREG_64(...) 64, __VA_ARGS__
diff --git a/hypervisor/arch/arm/lib.c b/hypervisor/arch/arm/lib.c
new file mode 100644
index 0000000..a0b2b5b
--- /dev/null
+++ b/hypervisor/arch/arm/lib.c
@@ -0,0 +1,36 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ * Authors:
+ * Jean-Philippe Brucker <jean-***@arm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <asm/sysregs.h>
+#include <asm/types.h>
+#include <jailhouse/processor.h>
+#include <jailhouse/string.h>
+
+int phys_processor_id(void)
+{
+ u32 mpidr;
+
+ arm_read_sysreg(MPIDR_EL1, mpidr);
+ return mpidr & MPIDR_CPUID_MASK;
+}
+
+void *memcpy(void *dest, const void *src, unsigned long n)
+{
+ unsigned long i;
+ const char *csrc = src;
+ char *cdest = dest;
+
+ for (i = 0; i < n; i++)
+ cdest[i] = csrc[i];
+
+ return dest;
+}
diff --git a/hypervisor/arch/arm/setup.c b/hypervisor/arch/arm/setup.c
index a282685..74dc0e6 100644
--- a/hypervisor/arch/arm/setup.c
+++ b/hypervisor/arch/arm/setup.c
@@ -43,7 +43,6 @@ void arch_cpu_restore(struct per_cpu *cpu_data)
#include <jailhouse/string.h>
#include <jailhouse/paging.h>
void arch_dbg_write_init(void) {}
-int phys_processor_id(void) { return 0; }
void arch_suspend_cpu(unsigned int cpu_id) {}
void arch_resume_cpu(unsigned int cpu_id) {}
void arch_reset_cpu(unsigned int cpu_id) {}
@@ -60,7 +59,6 @@ int arch_unmap_memory_region(struct cell *cell,
void arch_cell_destroy(struct per_cpu *cpu_data, struct cell *new_cell) {}
void arch_config_commit(struct per_cpu *cpu_data,
struct cell *cell_added_removed) {}
-void *memcpy(void *dest, const void *src, unsigned long n) { return NULL; }
void arch_dbg_write(const char *msg) {}
void arch_shutdown(void) {}
unsigned long arch_page_map_gphys2phys(struct per_cpu *cpu_data,
--
1.7.9.5
Jean-Philippe Brucker
2014-08-08 12:03:03 UTC
Since IRQs taken to HYP use a different vector, the trap handler needs to
be aware of the exit context. To this end, this patch adds an 'exit_reason'
field to struct registers.
The structure is still passed to the dispatcher as a pointer into the stack,
but care must be taken to ignore the exit_reason field when restoring the
user registers.
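Since the vector stub pushes the usr registers first and the exit_reason word last, the exit_reason sits at the lowest stack address, which suggests it must be the first field of struct registers for the sp-based cast to work (the processor.h hunk is not shown in this excerpt, so the exact layout is an assumption). A sketch under that assumption:

```c
#include <stddef.h>

#define NUM_USR_REGS 14

/* Assumed layout: exit_reason first (pushed last), then r0-r12 and lr.
 * The `add sp, sp, #4` in vmexit_common discards exit_reason so that
 * the final `pop {r0-r12, lr}` only restores the usr registers. */
struct registers {
	unsigned long exit_reason;
	unsigned long usr[NUM_USR_REGS];
};
```

The assertions below only check that the field order matches the push order; offsets are in units of the native word size.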

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/Makefile | 2 +-
hypervisor/arch/arm/control.c | 31 +++++++++++++++++++++++++++
hypervisor/arch/arm/exception.S | 20 ++++++++++++++---
hypervisor/arch/arm/include/asm/control.h | 1 +
hypervisor/arch/arm/include/asm/processor.h | 4 ++++
hypervisor/arch/arm/setup.c | 2 +-
6 files changed, 55 insertions(+), 5 deletions(-)
create mode 100644 hypervisor/arch/arm/control.c

diff --git a/hypervisor/arch/arm/Makefile b/hypervisor/arch/arm/Makefile
index 78890ef..641b55d 100644
--- a/hypervisor/arch/arm/Makefile
+++ b/hypervisor/arch/arm/Makefile
@@ -14,7 +14,7 @@ KBUILD_AFLAGS := $(filter-out -include asm/unified.h,$(KBUILD_AFLAGS))

always := built-in.o

-obj-y := entry.o dbg-write.o exception.o setup.o lib.o traps.o
+obj-y := entry.o dbg-write.o exception.o setup.o control.o lib.o traps.o
obj-y += paging.o mmu_hyp.o mmu_cell.o
obj-y += irqchip.o
obj-$(CONFIG_ARM_GIC_V3) += gic-v3.o
diff --git a/hypervisor/arch/arm/control.c b/hypervisor/arch/arm/control.c
new file mode 100644
index 0000000..e740977
--- /dev/null
+++ b/hypervisor/arch/arm/control.c
@@ -0,0 +1,31 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ * Authors:
+ * Jean-Philippe Brucker <jean-***@arm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <asm/control.h>
+#include <asm/irqchip.h>
+#include <jailhouse/printk.h>
+
+void arch_handle_exit(struct per_cpu *cpu_data, struct registers *regs)
+{
+ switch (regs->exit_reason) {
+ case EXIT_REASON_IRQ:
+ irqchip_handle_irq(cpu_data);
+ break;
+ case EXIT_REASON_TRAP:
+ arch_handle_trap(cpu_data, regs);
+ break;
+ default:
+ printk("Internal error: %ld exit not implemented\n",
+ regs->exit_reason);
+ while(1);
+ }
+}
diff --git a/hypervisor/arch/arm/exception.S b/hypervisor/arch/arm/exception.S
index 075172b..230da47 100644
--- a/hypervisor/arch/arm/exception.S
+++ b/hypervisor/arch/arm/exception.S
@@ -11,6 +11,7 @@
*/

#include <asm/head.h>
+#include <asm/processor.h>
#include <asm/sysregs.h>

.text
@@ -23,16 +24,29 @@ hyp_vectors:
b .
b .
b hyp_trap
- b .
+ b hyp_irq
b .

-hyp_trap:
+.macro handle_vmexit exit_reason
/* Fill the struct registers. Should comply with NUM_USR_REGS */
push {r0-r12, lr}
+ mov r0, #\exit_reason
+ b vmexit_common
+.endm
+
+hyp_irq:
+ handle_vmexit EXIT_REASON_IRQ
+hyp_trap:
+ handle_vmexit EXIT_REASON_TRAP
+
+vmexit_common:
+ push {r0}

arm_read_sysreg(TPIDR_EL2, r0)
mov r1, sp
- bl arch_handle_trap
+ bl arch_handle_exit
+
+ add sp, sp, #4

/* Restore usr regs */
pop {r0-r12, lr}
diff --git a/hypervisor/arch/arm/include/asm/control.h b/hypervisor/arch/arm/include/asm/control.h
index 4903d87..c974bc1 100644
--- a/hypervisor/arch/arm/include/asm/control.h
+++ b/hypervisor/arch/arm/include/asm/control.h
@@ -21,6 +21,7 @@
int arch_mmu_cell_init(struct cell *cell);
int arch_mmu_cpu_cell_init(struct per_cpu *cpu_data);
void arch_handle_trap(struct per_cpu *cpu_data, struct registers *guest_regs);
+void arch_handle_exit(struct per_cpu *cpu_data, struct registers *guest_regs);

#endif /* !__ASSEMBLY__ */

diff --git a/hypervisor/arch/arm/include/asm/processor.h b/hypervisor/arch/arm/include/asm/processor.h
index 6dbcd07..5744c0e 100644
--- a/hypervisor/arch/arm/include/asm/processor.h
+++ b/hypervisor/arch/arm/include/asm/processor.h
@@ -126,11 +126,15 @@
#define ESR_EC_DABT 0x24
#define ESR_EC_DABT_HYP 0x25

+#define EXIT_REASON_TRAP 0x1
+#define EXIT_REASON_IRQ 0x2
+
#define NUM_USR_REGS 14

#ifndef __ASSEMBLY__

struct registers {
+ unsigned long exit_reason;
/* r0 - r12 and lr. The other registers are banked. */
unsigned long usr[NUM_USR_REGS];
};
diff --git a/hypervisor/arch/arm/setup.c b/hypervisor/arch/arm/setup.c
index 3afabbf..d293c2c 100644
--- a/hypervisor/arch/arm/setup.c
+++ b/hypervisor/arch/arm/setup.c
@@ -51,7 +51,7 @@ int arch_init_early(void)
int arch_cpu_init(struct per_cpu *cpu_data)
{
int err = 0;
- unsigned long hcr = HCR_VM_BIT;
+ unsigned long hcr = HCR_VM_BIT | HCR_IMO_BIT | HCR_FMO_BIT;

/*
* Copy the registers to restore from the linux stack here, because we
--
1.7.9.5
Jean-Philippe Brucker
2014-08-08 12:03:07 UTC
Permalink
On some implementations, instructions may trap before the PE has checked
their condition code. This patch adds the ability to check the condition
before emulating an instruction that would not have been executed. In
Thumb mode, the IT state also has to be advanced when skipping an
instruction.
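For illustration, the flag check at the heart of this patch can be exercised on its own (a sketch using the same `cc_map` table as below; `nzcv` is the top nibble of the CPSR, i.e. `cpsr >> 28`, and `condition_passes` is an illustrative helper name):

```c
#include <assert.h>

/* cc_map is indexed by the 4-bit condition code; each bit of the 16-bit
 * entry tells whether the condition passes for that NZCV flags value. */
static const unsigned short cc_map[16] = {
	0xF0F0, /* EQ == Z set */
	0x0F0F, /* NE */
	0xCCCC, /* CS == C set */
	0x3333, /* CC */
	0xFF00, /* MI == N set */
	0x00FF, /* PL */
	0xAAAA, /* VS == V set */
	0x5555, /* VC */
	0x0C0C, /* HI == C set && Z clear */
	0xF3F3, /* LS == C clear || Z set */
	0xAA55, /* GE == (N==V) */
	0x55AA, /* LT == (N!=V) */
	0x0A05, /* GT == (!Z && (N==V)) */
	0xF5FA, /* LE == (Z || (N!=V)) */
	0xFFFF, /* AL always */
	0x0000, /* NV */
};

/* Select the bit of the table entry indexed by the current flags */
static int condition_passes(unsigned int cond, unsigned int nzcv)
{
	return (cc_map[cond] >> nzcv) & 1;
}
```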

Most of this patch is copied and adapted from Linux and KVM.

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/include/asm/processor.h | 5 ++
hypervisor/arch/arm/traps.c | 116 +++++++++++++++++++++++++++
2 files changed, 121 insertions(+)

diff --git a/hypervisor/arch/arm/include/asm/processor.h b/hypervisor/arch/arm/include/asm/processor.h
index 5744c0e..599f4f6 100644
--- a/hypervisor/arch/arm/include/asm/processor.h
+++ b/hypervisor/arch/arm/include/asm/processor.h
@@ -31,6 +31,8 @@
#define PSR_F_BIT (1 << 6)
#define PSR_I_BIT (1 << 7)
#define PSR_A_BIT (1 << 8)
+#define PSR_IT_MASK(it) (((it) & 0x3) << 25 | ((it) & 0xfc) << 8)
+#define PSR_IT(psr) (((psr) >> 25 & 0x3) | ((psr) >> 8 & 0xfc))

#define MPIDR_CPUID_MASK 0x00ffffff

@@ -125,6 +127,9 @@
#define ESR_EC_PCALIGN 0x22
#define ESR_EC_DABT 0x24
#define ESR_EC_DABT_HYP 0x25
+/* Condition code */
+#define ESR_ICC_CV_BIT (1 << 24)
+#define ESR_ICC_COND(icc) ((icc) >> 20 & 0xf)

#define EXIT_REASON_TRAP 0x1
#define EXIT_REASON_IRQ 0x2
diff --git a/hypervisor/arch/arm/traps.c b/hypervisor/arch/arm/traps.c
index 7367357..9de1657 100644
--- a/hypervisor/arch/arm/traps.c
+++ b/hypervisor/arch/arm/traps.c
@@ -8,6 +8,10 @@
*
* This work is licensed under the terms of the GNU GPL, version 2. See
* the COPYING file in the top-level directory.
+ *
+ * Condition check code is copied from Linux's
+ * - arch/arm/kernel/opcodes.c
+ * - arch/arm/kvm/emulate.c
*/

#include <asm/control.h>
@@ -16,6 +20,107 @@
#include <jailhouse/printk.h>
#include <jailhouse/control.h>

+/*
+ * condition code lookup table
+ * index into the table is test code: EQ, NE, ... LT, GT, AL, NV
+ *
+ * bit position in short is condition code: NZCV
+ */
+static const unsigned short cc_map[16] = {
+ 0xF0F0, /* EQ == Z set */
+ 0x0F0F, /* NE */
+ 0xCCCC, /* CS == C set */
+ 0x3333, /* CC */
+ 0xFF00, /* MI == N set */
+ 0x00FF, /* PL */
+ 0xAAAA, /* VS == V set */
+ 0x5555, /* VC */
+ 0x0C0C, /* HI == C set && Z clear */
+ 0xF3F3, /* LS == C clear || Z set */
+ 0xAA55, /* GE == (N==V) */
+ 0x55AA, /* LT == (N!=V) */
+ 0x0A05, /* GT == (!Z && (N==V)) */
+ 0xF5FA, /* LE == (Z || (N!=V)) */
+ 0xFFFF, /* AL always */
+ 0 /* NV */
+};
+
+/* Check condition field either from ESR or from SPSR in thumb mode */
+static bool arch_failed_condition(struct trap_context *ctx)
+{
+ u32 class = ESR_EC(ctx->esr);
+ u32 icc = ESR_ICC(ctx->esr);
+ u32 cpsr = ctx->cpsr;
+ u32 flags = cpsr >> 28;
+ u32 cond;
+ /*
+ * Trapped instruction is unconditional, already passed the condition
+ * check, or is invalid
+ */
+ if (class & 0x30 || class == 0)
+ return false;
+
+ /* Is condition field valid? */
+ if (icc & ESR_ICC_CV_BIT) {
+ cond = ESR_ICC_COND(icc);
+ } else {
+ /* This can happen in Thumb mode: examine IT state. */
+ unsigned long it = PSR_IT(cpsr);
+
+ /* it == 0 => unconditional. */
+ if (it == 0)
+ return false;
+
+ /* The cond for this insn works out as the top 4 bits. */
+ cond = (it >> 4);
+ }
+
+ /* Compare the apsr flags with the condition code */
+ if ((cc_map[cond] >> flags) & 1)
+ return false;
+
+ return true;
+}
+
+/*
+ * When exceptions occur while instructions are executed in Thumb IF-THEN
+ * blocks, the ITSTATE field of the CPSR is not advanced (updated), so we have
+ * to do this little bit of work manually. The fields map like this:
+ *
+ * IT[7:0] -> CPSR[26:25],CPSR[15:10]
+ */
+static void arch_advance_itstate(struct trap_context *ctx)
+{
+ unsigned long itbits, cond;
+ unsigned long cpsr = ctx->cpsr;
+
+ if (!(cpsr & PSR_IT_MASK(0xff)))
+ return;
+
+ itbits = PSR_IT(cpsr);
+ cond = itbits >> 5;
+
+ if ((itbits & 0x7) == 0)
+ /* One instruction left in the block, next itstate is 0 */
+ itbits = cond = 0;
+ else
+ itbits = (itbits << 1) & 0x1f;
+
+ itbits |= (cond << 5);
+ cpsr &= ~PSR_IT_MASK(0xff);
+ cpsr |= PSR_IT_MASK(itbits);
+
+ ctx->cpsr = cpsr;
+}
+
+static void arch_skip_instruction(struct trap_context *ctx)
+{
+ u32 instruction_length = ESR_IL(ctx->esr);
+
+ ctx->pc += (instruction_length ? 4 : 2);
+ arch_advance_itstate(ctx);
+}
+
static void access_cell_reg(struct trap_context *ctx, u8 reg,
unsigned long *val, bool is_read)
{
@@ -107,6 +212,15 @@ void arch_handle_trap(struct per_cpu *cpu_data, struct registers *guest_regs)
exception_class = ESR_EC(ctx.esr);
ctx.regs = guest_regs->usr;

+ /*
+ * On some implementations, instructions that fail their condition check
+ * can trap.
+ */
+ if (arch_failed_condition(&ctx)) {
+ arch_skip_instruction(&ctx);
+ goto restore_context;
+ }
+
if (trap_handlers[exception_class])
ret = trap_handlers[exception_class](cpu_data, &ctx);

@@ -116,5 +230,7 @@ void arch_handle_trap(struct per_cpu *cpu_data, struct registers *guest_regs)
while(1);
}

+restore_context:
+ arm_write_banked_reg(SPSR_hyp, ctx.cpsr);
arm_write_banked_reg(ELR_hyp, ctx.pc);
}
--
1.7.9.5
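The ITSTATE bookkeeping in this patch can be modelled portably (a sketch mirroring `arch_advance_itstate()` and the `PSR_IT` macros, operating on a plain CPSR value instead of the trap context; `advance_itstate` is an illustrative name):

```c
#include <assert.h>

/* IT[7:0] lives in CPSR[26:25],CPSR[15:10] and must be advanced by hand
 * when the hypervisor skips an instruction inside a Thumb IT block. */
#define PSR_IT_MASK(it)	(((it) & 0x3) << 25 | ((it) & 0xfc) << 8)
#define PSR_IT(psr)	(((psr) >> 25 & 0x3) | ((psr) >> 8 & 0xfc))

static unsigned long advance_itstate(unsigned long cpsr)
{
	unsigned long itbits, cond;

	if (!(cpsr & PSR_IT_MASK(0xff)))
		return cpsr;		/* not inside an IT block */

	itbits = PSR_IT(cpsr);
	cond = itbits >> 5;

	if ((itbits & 0x7) == 0)
		/* One instruction left in the block, next itstate is 0 */
		itbits = cond = 0;
	else
		itbits = (itbits << 1) & 0x1f;

	itbits |= (cond << 5);
	return (cpsr & ~PSR_IT_MASK(0xff)) | PSR_IT_MASK(itbits);
}
```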
Jean-Philippe Brucker
2014-08-08 12:02:52 UTC
Permalink
This patch uses the kernel config to detect which UART is available.
Currently, only the vexpress platform is implemented.
It assumes that the first UART uses the default, fixed VA->PA mapping
set up by the kernel for the vexpress low-level printk.

This is far from ideal: a clean implementation would either need to
communicate the UART address from the driver, or postpone all debug
printks until the hypervisor is able to use its own mappings, as Linux
does when earlyprintk is disabled.
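As a side note on `uart_init()` below: with the 24 MHz vexpress UART clock, the integer divisor written to UARTIBRD works out as follows (a sketch; `pl011_ibrd` is an illustrative helper name, and the fractional part that would normally go into UARTFBRD is dropped, as in the patch):

```c
#include <assert.h>

#define UART_CLK 24000000

/* Integer part of the PL011 baud divisor: clk / (16 * baudrate) */
static unsigned int pl011_ibrd(unsigned int baudrate)
{
	return UART_CLK / (16 * baudrate);
}
```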

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/Makefile | 7 +-
hypervisor/arch/arm/dbg-write-pl011.c | 24 ++++++
hypervisor/arch/arm/dbg-write.c | 46 +++++++++++
hypervisor/arch/arm/include/asm/debug.h | 35 ++++++++
hypervisor/arch/arm/include/asm/platform.h | 27 ++++++
hypervisor/arch/arm/include/asm/uart_pl011.h | 113 ++++++++++++++++++++++++++
hypervisor/arch/arm/setup.c | 2 -
7 files changed, 251 insertions(+), 3 deletions(-)
create mode 100644 hypervisor/arch/arm/dbg-write-pl011.c
create mode 100644 hypervisor/arch/arm/dbg-write.c
create mode 100644 hypervisor/arch/arm/include/asm/debug.h
create mode 100644 hypervisor/arch/arm/include/asm/platform.h
create mode 100644 hypervisor/arch/arm/include/asm/uart_pl011.h

diff --git a/hypervisor/arch/arm/Makefile b/hypervisor/arch/arm/Makefile
index 425f221..bb9203c 100644
--- a/hypervisor/arch/arm/Makefile
+++ b/hypervisor/arch/arm/Makefile
@@ -14,4 +14,9 @@ KBUILD_AFLAGS := $(filter-out -include asm/unified.h,$(KBUILD_AFLAGS))

always := built-in.o

-obj-y := entry.o setup.o lib.o
+obj-y := entry.o dbg-write.o setup.o lib.o
+obj-$(CONFIG_ARCH_VEXPRESS) += dbg-write-pl011.o
+
+# Needed for kconfig
+ccflags-y += -I$(KERNELDIR)/include
+asflags-y += -I$(KERNELDIR)/include
diff --git a/hypervisor/arch/arm/dbg-write-pl011.c b/hypervisor/arch/arm/dbg-write-pl011.c
new file mode 100644
index 0000000..06f386f
--- /dev/null
+++ b/hypervisor/arch/arm/dbg-write-pl011.c
@@ -0,0 +1,24 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ * Authors:
+ * Jean-Philippe Brucker <jean-***@arm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <asm/io.h>
+#include <asm/uart_pl011.h>
+
+/* All the helpers are in the header, to make them re-usable by the inmates */
+void uart_chip_init(struct uart_chip *chip)
+{
+ chip->wait = uart_wait;
+ chip->busy = uart_busy;
+ chip->write = uart_write;
+
+ uart_init(chip);
+}
diff --git a/hypervisor/arch/arm/dbg-write.c b/hypervisor/arch/arm/dbg-write.c
new file mode 100644
index 0000000..411c753
--- /dev/null
+++ b/hypervisor/arch/arm/dbg-write.c
@@ -0,0 +1,46 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ * Authors:
+ * Jean-Philippe Brucker <jean-***@arm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <asm/debug.h>
+#include <asm/platform.h>
+#include <jailhouse/printk.h>
+#include <jailhouse/processor.h>
+
+static struct uart_chip uart;
+
+void arch_dbg_write_init(void)
+{
+ /* FIXME: parse a device tree */
+ uart.baudrate = 115200;
+ uart.fifo_enabled = true;
+ uart.virt_base = UART_BASE_VIRT;
+ uart.phys_base = UART_BASE_PHYS;
+
+ uart_chip_init(&uart);
+}
+
+void arch_dbg_write(const char *msg)
+{
+ char c;
+
+ while (1) {
+ c = *msg++;
+ if (!c)
+ break;
+
+ uart.wait(&uart);
+ if (panic_in_progress && panic_cpu != phys_processor_id())
+ break;
+ uart.write(&uart, c);
+ uart.busy(&uart);
+ }
+}
diff --git a/hypervisor/arch/arm/include/asm/debug.h b/hypervisor/arch/arm/include/asm/debug.h
new file mode 100644
index 0000000..0df7fdb
--- /dev/null
+++ b/hypervisor/arch/arm/include/asm/debug.h
@@ -0,0 +1,35 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ * Authors:
+ * Jean-Philippe Brucker <jean-***@arm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#ifndef JAILHOUSE_ASM_DEBUG_H_
+#define JAILHOUSE_ASM_DEBUG_H_
+
+#include <asm/types.h>
+
+#ifndef __ASSEMBLY__
+
+/* Defines the bare minimum for debug writes */
+struct uart_chip {
+ void *virt_base;
+ void *phys_base;
+ unsigned int baudrate;
+ bool fifo_enabled;
+
+ void (*wait)(struct uart_chip *);
+ void (*busy)(struct uart_chip *);
+ void (*write)(struct uart_chip *, char c);
+};
+
+void uart_chip_init(struct uart_chip *chip);
+
+#endif /* !__ASSEMBLY__ */
+#endif /* !JAILHOUSE_ASM_DEBUG_H_ */
diff --git a/hypervisor/arch/arm/include/asm/platform.h b/hypervisor/arch/arm/include/asm/platform.h
new file mode 100644
index 0000000..f18dd83
--- /dev/null
+++ b/hypervisor/arch/arm/include/asm/platform.h
@@ -0,0 +1,27 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ * Authors:
+ * Jean-Philippe Brucker <jean-***@arm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#ifndef _JAILHOUSE_ASM_PLATFORM_H
+#define _JAILHOUSE_ASM_PLATFORM_H
+
+#include <linux/kconfig.h>
+
+#ifndef __ASSEMBLY__
+
+#ifdef CONFIG_ARCH_VEXPRESS
+
+#define UART_BASE_PHYS ((void *)0x1c090000)
+#define UART_BASE_VIRT ((void *)0xf8090000)
+
+#endif /* CONFIG_ARCH_VEXPRESS */
+#endif /* !__ASSEMBLY__ */
+#endif /* !_JAILHOUSE_ASM_PLATFORM_H */
diff --git a/hypervisor/arch/arm/include/asm/uart_pl011.h b/hypervisor/arch/arm/include/asm/uart_pl011.h
new file mode 100644
index 0000000..cfa29f3
--- /dev/null
+++ b/hypervisor/arch/arm/include/asm/uart_pl011.h
@@ -0,0 +1,113 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ * Authors:
+ * Jean-Philippe Brucker <jean-***@arm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#ifndef _JAILHOUSE_ASM_DEBUG_PL011_H
+#define _JAILHOUSE_ASM_DEBUG_PL011_H
+
+#include <asm/debug.h>
+#include <asm/io.h>
+#include <asm/processor.h>
+
+#define UART_CLK 24000000
+
+#define UARTDR 0x00
+#define UARTRSR 0x04
+#define UARTECR 0x04
+#define UARTFR 0x18
+#define UARTILPR 0x20
+#define UARTIBRD 0x24
+#define UARTFBRD 0x28
+#define UARTLCR_H 0x2c
+#define UARTCR 0x30
+#define UARTIFLS 0x34
+#define UARTIMSC 0x38
+#define UARTRIS 0x3c
+#define UARTMIS 0x40
+#define UARTICR 0x44
+#define UARTDMACR 0x48
+
+#define UARTFR_RXFF (1 << 6)
+#define UARTFR_TXFE (1 << 7)
+#define UARTFR_TXFF (1 << 5)
+#define UARTFR_RXFE (1 << 4)
+#define UARTFR_BUSY (1 << 3)
+#define UARTFR_DCD (1 << 2)
+#define UARTFR_DSR (1 << 1)
+#define UARTFR_CTS (1 << 0)
+
+#define UARTCR_CTSEn (1 << 15)
+#define UARTCR_RTSEn (1 << 14)
+#define UARTCR_Out2 (1 << 13)
+#define UARTCR_Out1 (1 << 12)
+#define UARTCR_RTS (1 << 11)
+#define UARTCR_DTR (1 << 10)
+#define UARTCR_RXE (1 << 9)
+#define UARTCR_TXE (1 << 8)
+#define UARTCR_LBE (1 << 7)
+#define UARTCR_SIRLP (1 << 2)
+#define UARTCR_SIREN (1 << 1)
+#define UARTCR_EN (1 << 0)
+
+#define UARTLCR_H_SPS (1 << 7)
+#define UARTLCR_H_WLEN (3 << 5)
+#define UARTLCR_H_FEN (1 << 4)
+#define UARTLCR_H_STP2 (1 << 3)
+#define UARTLCR_H_EPS (1 << 2)
+#define UARTLCR_H_PEN (1 << 1)
+#define UARTLCR_H_BRK (1 << 0)
+
+#ifndef __ASSEMBLY__
+
+static void uart_init(struct uart_chip *chip)
+{
+ /* 115200 8N1 */
+ /* FIXME: Can be improved with an implementation of __aeabi_uidiv */
+ u32 bauddiv = UART_CLK / (16 * 115200);
+ void *base = chip->virt_base;
+
+ writew_relaxed(0, base + UARTCR);
+ while (readb_relaxed(base + UARTFR) & UARTFR_BUSY)
+ cpu_relax();
+
+ writeb_relaxed(UARTLCR_H_WLEN, base + UARTLCR_H);
+ writew_relaxed(bauddiv, base + UARTIBRD);
+ writew_relaxed((UARTCR_EN | UARTCR_TXE | UARTCR_RXE | UARTCR_Out1
+ | UARTCR_Out2), base + UARTCR);
+}
+
+static void uart_wait(struct uart_chip *chip)
+{
+ u32 flags;
+
+ do {
+ flags = readl_relaxed(chip->virt_base + UARTFR);
+ cpu_relax();
+ } while (flags & UARTFR_TXFF); /* FIFO full */
+}
+
+static void uart_busy(struct uart_chip *chip)
+{
+ u32 flags;
+
+ do {
+ flags = readl_relaxed(chip->virt_base + UARTFR);
+ cpu_relax();
+ } while (flags & UARTFR_BUSY);
+}
+
+static void uart_write(struct uart_chip *chip, char c)
+{
+ writel_relaxed(c, chip->virt_base + UARTDR);
+}
+
+#endif /* !__ASSEMBLY__ */
+#endif /* !_JAILHOUSE_ASM_DEBUG_PL011_H */
diff --git a/hypervisor/arch/arm/setup.c b/hypervisor/arch/arm/setup.c
index 74dc0e6..99dc79c 100644
--- a/hypervisor/arch/arm/setup.c
+++ b/hypervisor/arch/arm/setup.c
@@ -42,7 +42,6 @@ void arch_cpu_restore(struct per_cpu *cpu_data)
#include <jailhouse/control.h>
#include <jailhouse/string.h>
#include <jailhouse/paging.h>
-void arch_dbg_write_init(void) {}
void arch_suspend_cpu(unsigned int cpu_id) {}
void arch_resume_cpu(unsigned int cpu_id) {}
void arch_reset_cpu(unsigned int cpu_id) {}
@@ -59,7 +58,6 @@ int arch_unmap_memory_region(struct cell *cell,
void arch_cell_destroy(struct per_cpu *cpu_data, struct cell *new_cell) {}
void arch_config_commit(struct per_cpu *cpu_data,
struct cell *cell_added_removed) {}
-void arch_dbg_write(const char *msg) {}
void arch_shutdown(void) {}
unsigned long arch_page_map_gphys2phys(struct per_cpu *cpu_data,
unsigned long gphys)
--
1.7.9.5
Jan Kiszka
2014-08-09 11:44:44 UTC
Permalink
Post by Jean-Philippe Brucker
This patch uses the kernel config to detect which UART is available.
This approach doesn't build for me. linux/kconfig.h is not found,
probably because my kernel was built out of tree.

We have a primitive config.h mechanism in Jailhouse as well (you will
have to create hypervisor/include/jailhouse/config.h manually). Better
use that one, also to avoid pulling in arbitrary code from the Linux
kernel. I'd like to keep the code bases separate for building the
hypervisor core.

Jan
Jean-Philippe Brucker
2014-08-09 16:01:34 UTC
Permalink
Post by Jan Kiszka
Post by Jean-Philippe Brucker
This patch uses the kernel config to detect which UART is available.
This approach doesn't build for me. linux/kconfig.h is not found,
probably because my kernel was built out of tree.
We have a primitive config.h mechanism in Jailhouse as well (you will
have to create hypervisor/include/jailhouse/config.h manually). Better
use that one, also to avoid pulling in arbitrary code from the Linux
kernel. I'd like to keep the code bases separate for building the
hypervisor core.
I used platform.h as a temporary solution, in order to guess the device
addresses from the kconfig, but I always built against the root kernel.

Since there are few device dependencies, we could indeed define
'CONFIG_ARCH_*' and 'CONFIG_ARM_GIC' manually in jailhouse/config.h, but
a cleaner solution would be to parse device trees.

Those variables are also currently used by the Makefile to build the
right drivers and avoid dead code, but they could be removed as well
with an improved probing system. I guess you could pass them in the
CFLAGS to avoid including config.h for the moment.
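For reference, the stop-gap discussed here could be a hand-written `hypervisor/include/jailhouse/config.h` along these lines (a sketch; the macro names are the ones the ARM Makefile currently tests):

```c
/* hypervisor/include/jailhouse/config.h -- hand-written, not generated */
#define CONFIG_ARCH_VEXPRESS	1
#define CONFIG_ARM_GIC_V3	1
```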

Thanks,
Jean-Philippe
Jean-Philippe Brucker
2014-08-08 12:02:44 UTC
Permalink
Most armv7-compatible toolchains still need an additional flag to
recognise instructions such as ERET or an MSR to banked registers.
This patch allows the files that require it to include the virt flag.
It also forces the hypervisor image to use only the ARM instruction
set.
Support for a Thumb2 hypervisor and kernel will be added later.
Guests should still be able to run in Thumb2, as long as they can be
entered in ARM mode.

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/Makefile | 4 ++++
hypervisor/arch/arm/Makefile | 1 -
hypervisor/arch/arm/entry.S | 1 +
hypervisor/arch/arm/include/asm/head.h | 24 ++++++++++++++++++++++++
4 files changed, 29 insertions(+), 1 deletion(-)
create mode 100644 hypervisor/arch/arm/include/asm/head.h

diff --git a/hypervisor/Makefile b/hypervisor/Makefile
index 827209f..688d7f0 100644
--- a/hypervisor/Makefile
+++ b/hypervisor/Makefile
@@ -21,6 +21,10 @@ KBUILD_CFLAGS += -mcmodel=kernel
KBUILD_CPPFLAGS += -m64
endif

+ifeq ($(SRCARCH),arm)
+KBUILD_CFLAGS += -marm
+endif
+
ifneq ($(wildcard $(src)/include/jailhouse/config.h),)
KBUILD_CFLAGS += -include $(src)/include/jailhouse/config.h
endif
diff --git a/hypervisor/arch/arm/Makefile b/hypervisor/arch/arm/Makefile
index d5991cd..6ceb061 100644
--- a/hypervisor/arch/arm/Makefile
+++ b/hypervisor/arch/arm/Makefile
@@ -14,5 +14,4 @@ KBUILD_AFLAGS := $(filter-out -include asm/unified.h,$(KBUILD_AFLAGS))

always := built-in.o

-#obj-y := dbg-write.o entry.o setup.o fault.o control.o mmio.o
obj-y := entry.o setup.o
diff --git a/hypervisor/arch/arm/entry.S b/hypervisor/arch/arm/entry.S
index c374614..25325a0 100644
--- a/hypervisor/arch/arm/entry.S
+++ b/hypervisor/arch/arm/entry.S
@@ -10,6 +10,7 @@
* the COPYING file in the top-level directory.
*/

+#include <asm/head.h>
#include <asm/percpu.h>

/* Entry point for Linux loader module on JAILHOUSE_ENABLE */
diff --git a/hypervisor/arch/arm/include/asm/head.h b/hypervisor/arch/arm/include/asm/head.h
new file mode 100644
index 0000000..aee1ade
--- /dev/null
+++ b/hypervisor/arch/arm/include/asm/head.h
@@ -0,0 +1,24 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ * Authors:
+ * Jean-Philippe Brucker <jean-***@arm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#ifndef _JAILHOUSE_ASM_HEAD_H
+#define _JAILHOUSE_ASM_HEAD_H
+
+#ifdef __ASSEMBLY__
+ .arch_extension virt
+ .arm
+ .syntax unified
+#else
+ asm(".arch_extension virt\n");
+#endif
+
+#endif /* !_JAILHOUSE_ASM_HEAD_H */
--
1.7.9.5
Jean-Philippe Brucker
2014-08-08 12:02:48 UTC
Permalink
This patch adds a few simple macros that allow the C code to use
specific ARM instructions.
The memory_barrier helper is only used to commit the changes of cpu_init
on the last core before allowing the others to return to EL1.

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/include/asm/processor.h | 10 ++++++++++
1 file changed, 10 insertions(+)

diff --git a/hypervisor/arch/arm/include/asm/processor.h b/hypervisor/arch/arm/include/asm/processor.h
index ef76687..25bab65 100644
--- a/hypervisor/arch/arm/include/asm/processor.h
+++ b/hypervisor/arch/arm/include/asm/processor.h
@@ -20,12 +20,22 @@
struct registers {
};

+#define dmb(domain) asm volatile("dmb " #domain "\n" ::: "memory")
+#define dsb(domain) asm volatile("dsb " #domain "\n" ::: "memory")
+#define isb() asm volatile("isb\n")
+
+#define wfe() asm volatile("wfe\n")
+#define wfi() asm volatile("wfi\n")
+#define sev() asm volatile("sev\n")
+
static inline void cpu_relax(void)
{
+ asm volatile("" : : : "memory");
}

static inline void memory_barrier(void)
{
+ dmb(ish);
}

#endif /* !__ASSEMBLY__ */
--
1.7.9.5
Jean-Philippe Brucker
2014-08-08 12:03:00 UTC
Permalink
There is no need for any code in init_late for the moment; the x86 side
uses it to initialise VT-d and PCI for the root cell.
This patch allows a complete hypervisor setup to be run from the driver,
returning to the kernel normally.

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/setup.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hypervisor/arch/arm/setup.c b/hypervisor/arch/arm/setup.c
index 08761d3..ef18a7d 100644
--- a/hypervisor/arch/arm/setup.c
+++ b/hypervisor/arch/arm/setup.c
@@ -80,7 +80,7 @@ int arch_cpu_init(struct per_cpu *cpu_data)

int arch_init_late(void)
{
- return -ENOSYS;
+ return 0;
}

void arch_cpu_activate_vmm(struct per_cpu *cpu_data)
--
1.7.9.5
Jean-Philippe Brucker
2014-08-08 12:02:51 UTC
Permalink
Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/include/asm/bitops.h | 90 ++++++++++++++++++++++++++----
1 file changed, 80 insertions(+), 10 deletions(-)

diff --git a/hypervisor/arch/arm/include/asm/bitops.h b/hypervisor/arch/arm/include/asm/bitops.h
index fd3d785..de63d39 100644
--- a/hypervisor/arch/arm/include/asm/bitops.h
+++ b/hypervisor/arch/arm/include/asm/bitops.h
@@ -15,41 +15,111 @@

#include <asm/types.h>

+#ifndef __ASSEMBLY__
+
+#define BITOPT_ALIGN(bits, addr) \
+ do { \
+ (addr) = (unsigned long *)((u32)(addr) & ~0x3) \
+ + (bits) / BITS_PER_LONG; \
+ (bits) %= BITS_PER_LONG; \
+ } while (0)
+
+/* Load the cacheline in exclusive state */
+#define PRELOAD(addr) \
+ asm volatile (".arch_extension mp\n" \
+ "pldw %0\n" \
+ : "+Qo" (*(volatile unsigned long *)addr));
+
static inline __attribute__((always_inline)) void
clear_bit(int nr, volatile unsigned long *addr)
{
+ unsigned long ret, val;
+
+ BITOPT_ALIGN(nr, addr);
+
+ PRELOAD(addr);
+ do {
+ asm volatile (
+ "ldrex %1, %2\n"
+ "bic %1, %3\n"
+ "strex %0, %1, %2\n"
+ : "=r" (ret), "=r" (val),
+ /* Declare the clobbering of this address to the compiler */
+ "+Qo" (*(volatile unsigned long *)addr)
+ : "r" (1 << nr));
+ } while (ret);
}

static inline __attribute__((always_inline)) void
set_bit(unsigned int nr, volatile unsigned long *addr)
{
+ unsigned long ret, val;
+
+ BITOPT_ALIGN(nr, addr);
+
+ PRELOAD(addr);
+ do {
+ asm volatile (
+ "ldrex %1, %2\n"
+ "orr %1, %3\n"
+ "strex %0, %1, %2\n"
+ : "=r" (ret), "=r" (val),
+ "+Qo" (*(volatile unsigned long *)addr)
+ : "r" (1 << nr));
+ } while (ret);
}

static inline __attribute__((always_inline)) int
-constant_test_bit(unsigned int nr, const volatile unsigned long *addr)
+test_bit(unsigned int nr, const volatile unsigned long *addr)
{
return ((1UL << (nr % BITS_PER_LONG)) &
(addr[nr / BITS_PER_LONG])) != 0;
}

-static inline int variable_test_bit(int nr, volatile const unsigned long *addr)
+static inline int test_and_set_bit(int nr, volatile unsigned long *addr)
{
- return 0;
+ unsigned long ret, val, test;
+
+ BITOPT_ALIGN(nr, addr);
+
+ PRELOAD(addr);
+ do {
+ asm volatile (
+ "ldrex %1, %3\n"
+ "ands %2, %1, %4\n"
+ "it eq\n"
+ "orreq %1, %4\n"
+ "strex %0, %1, %3\n"
+ : "=r" (ret), "=r" (val), "=r" (test),
+ "+Qo" (*(volatile unsigned long *)addr)
+ : "r" (1 << nr));
+ } while (ret);
+
+ return !!(test);
}

-#define test_bit(nr, addr) \
- (__builtin_constant_p((nr)) \
- ? constant_test_bit((nr), (addr)) \
- : variable_test_bit((nr), (addr)))

-static inline int test_and_set_bit(int nr, volatile unsigned long *addr)
+/* Count leading zeroes */
+static inline unsigned long clz(unsigned long word)
+{
+ unsigned long val;
+ asm volatile ("clz %0, %1\n" : "=r" (val) : "r" (word));
+ return val;
+}
+
+/* Returns the position of the least significant 1, MSB=31, LSB=0 */
+static inline unsigned long ffsl(unsigned long word)
{
- return 0;
+ if (!word)
+ return 0;
+ asm volatile ("rbit %0, %0\n" : "+r" (word));
+ return clz(word);
}

static inline unsigned long ffzl(unsigned long word)
{
- return 0;
+ return ffsl(~word);
}

+#endif /* !__ASSEMBLY__ */
#endif /* !_JAILHOUSE_ASM_BITOPS_H */
--
1.7.9.5
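The `rbit` + `clz` trick used for `ffsl()` above (reversing the word turns "count leading zeroes" into "index of the least significant set bit") can be cross-checked against a portable bit walk (a sketch; `ffsl_model`/`ffzl_model` are illustrative names, not part of the patch, and the hardware version operates on 32-bit registers):

```c
#include <assert.h>

/* Portable equivalent of rbit+clz: index of the least significant 1 */
static unsigned long ffsl_model(unsigned long word)
{
	unsigned long pos = 0;

	if (!word)
		return 0;
	while (!(word & 1)) {
		word >>= 1;
		pos++;
	}
	return pos;
}

/* Index of the least significant 0, as in the patch's ffzl() */
static unsigned long ffzl_model(unsigned long word)
{
	return ffsl_model(~word);
}
```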
Jean-Philippe Brucker
2014-08-08 12:03:02 UTC
Permalink
Assuming there is a GIC distributor at address GICD_BASE, this patch
checks its version and calls the GIC init function. Linux's kconfig
header is used to guess the base address of the distributor.
Ideally, a device tree would be passed to the hypervisor in the root
cell's config, allowing all constant base addresses to be removed.

The patch also assumes that most of the GIC has been set up by Linux
prior to the hypervisor installation, and only initialises the vGIC.
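The per-CPU redistributor scan in `gic_cpu_init()` below can be modelled with mock register values (a sketch: `mock_typer` and `find_redist` are illustrative, the CPU affinity sits in GICR_TYPER[63:32] as in GICv3, and the extra VLPI frame that doubles the stride on GICv4 is ignored):

```c
#include <assert.h>
#include <stdint.h>

#define GICR_TYPER_Last	(1ULL << 4)
#define GICR_STRIDE	0x20000

/* One GICR_TYPER value per redistributor frame; the last frame in the
 * series has the Last bit set. */
static const uint64_t mock_typer[] = {
	(0ULL << 32),
	(1ULL << 32),
	(2ULL << 32) | GICR_TYPER_Last,
};

/* Walk the frames until the affinity matches or the Last bit ends the
 * series; returns the frame offset, or -1 if no redistributor matches. */
static long find_redist(unsigned int cpu_id)
{
	unsigned long offs = 0;
	unsigned int i = 0;
	uint64_t typer;

	do {
		typer = mock_typer[i++];
		if ((typer >> 32) == cpu_id)
			return (long)offs;
		offs += GICR_STRIDE;
	} while (!(typer & GICR_TYPER_Last));

	return -1;
}
```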

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/Makefile | 1 +
hypervisor/arch/arm/gic-v3.c | 159 +++++++++++++++++
hypervisor/arch/arm/include/asm/gic_common.h | 43 +++++
hypervisor/arch/arm/include/asm/gic_v3.h | 248 ++++++++++++++++++++++++++
hypervisor/arch/arm/include/asm/io.h | 3 +
hypervisor/arch/arm/include/asm/percpu.h | 4 +-
hypervisor/arch/arm/include/asm/platform.h | 18 +-
hypervisor/arch/arm/irqchip.c | 51 +++++-
8 files changed, 522 insertions(+), 5 deletions(-)
create mode 100644 hypervisor/arch/arm/gic-v3.c
create mode 100644 hypervisor/arch/arm/include/asm/gic_common.h
create mode 100644 hypervisor/arch/arm/include/asm/gic_v3.h

diff --git a/hypervisor/arch/arm/Makefile b/hypervisor/arch/arm/Makefile
index 3932f85..78890ef 100644
--- a/hypervisor/arch/arm/Makefile
+++ b/hypervisor/arch/arm/Makefile
@@ -17,6 +17,7 @@ always := built-in.o
obj-y := entry.o dbg-write.o exception.o setup.o lib.o traps.o
obj-y += paging.o mmu_hyp.o mmu_cell.o
obj-y += irqchip.o
+obj-$(CONFIG_ARM_GIC_V3) += gic-v3.o
obj-$(CONFIG_ARCH_VEXPRESS) += dbg-write-pl011.o

# Needed for kconfig
diff --git a/hypervisor/arch/arm/gic-v3.c b/hypervisor/arch/arm/gic-v3.c
new file mode 100644
index 0000000..b8ffaa8
--- /dev/null
+++ b/hypervisor/arch/arm/gic-v3.c
@@ -0,0 +1,159 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ * Authors:
+ * Jean-Philippe Brucker <jean-***@arm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <asm/types.h>
+#include <asm/io.h>
+#include <asm/irqchip.h>
+#include <asm/gic_common.h>
+#include <asm/platform.h>
+#include <asm/setup.h>
+#include <jailhouse/printk.h>
+#include <jailhouse/processor.h>
+
+/*
+ * This implementation assumes that the kernel driver already initialised most
+ * of the GIC.
+ * There is almost no instruction barrier, since IRQs are always disabled in the
+ * hyp, and ERET serves as the context synchronization event.
+ */
+
+static unsigned int gic_num_lr;
+
+static void *gicr_base;
+static unsigned int gicr_size;
+
+static int gic_init(void)
+{
+ int err;
+
+ /* FIXME: parse a dt */
+ gicr_base = GICR_BASE;
+ gicr_size = GICR_SIZE;
+
+ /* Let the per-cpu code access the redistributors */
+ err = arch_map_device(gicr_base, gicr_base, gicr_size);
+
+ return err;
+}
+
+static int gic_cpu_init(struct per_cpu *cpu_data)
+{
+ u64 typer;
+ u32 pidr;
+ u32 gic_version;
+ u32 cell_icc_ctlr, cell_icc_pmr, cell_icc_igrpen1;
+ u32 ich_vtr;
+ u32 ich_vmcr;
+ void *redist_base = gicr_base;
+
+ /* Find redistributor */
+ do {
+ pidr = readl_relaxed(redist_base + GICR_PIDR2);
+ gic_version = GICR_PIDR2_ARCH(pidr);
+ if (gic_version != 3 && gic_version != 4)
+ break;
+
+ typer = readq_relaxed(redist_base + GICR_TYPER);
+ if ((typer >> 32) == cpu_data->cpu_id) {
+ cpu_data->gicr_base = redist_base;
+ break;
+ }
+
+ redist_base += 0x20000;
+ if (gic_version == 4)
+ redist_base += 0x20000;
+ } while (!(typer & GICR_TYPER_Last));
+
+ if (cpu_data->gicr_base == 0) {
+ printk("GIC: No redist found for CPU%d\n", cpu_data->cpu_id);
+ return -ENODEV;
+ }
+
+ /* Ensure all IPIs are enabled */
+ writel_relaxed(0x0000ffff, redist_base + GICR_SGI_BASE + GICR_ISENABLER);
+
+ /*
+ * Set EOIMode to 1
+ * This allows us to drop the priority of level-triggered interrupts
+ * without deactivating them, ensuring that they won't be immediately
+ * re-triggered (e.g. by the timer).
+ * They can then be injected into the guest using the LR.HW bit, and
+ * will be deactivated once the guest does an EOI after handling the
+ * interrupt source.
+ */
+ arm_read_sysreg(ICC_CTLR_EL1, cell_icc_ctlr);
+ arm_write_sysreg(ICC_CTLR_EL1, ICC_CTLR_EOImode);
+
+ arm_read_sysreg(ICC_PMR_EL1, cell_icc_pmr);
+ arm_write_sysreg(ICC_PMR_EL1, ICC_PMR_DEFAULT);
+
+ arm_read_sysreg(ICC_IGRPEN1_EL1, cell_icc_igrpen1);
+ arm_write_sysreg(ICC_IGRPEN1_EL1, ICC_IGRPEN1_EN);
+
+ arm_read_sysreg(ICH_VTR_EL2, ich_vtr);
+ gic_num_lr = (ich_vtr & 0xf) + 1;
+
+ ich_vmcr = (cell_icc_pmr & ICC_PMR_MASK) << ICH_VMCR_VPMR_SHIFT;
+ if (cell_icc_igrpen1 & ICC_IGRPEN1_EN)
+ ich_vmcr |= ICH_VMCR_VENG1;
+ if (cell_icc_ctlr & ICC_CTLR_EOImode)
+ ich_vmcr |= ICH_VMCR_VEOIM;
+ arm_write_sysreg(ICH_VMCR_EL2, ich_vmcr);
+
+ /* After this, the cells access the virtual interface of the GIC. */
+ arm_write_sysreg(ICH_HCR_EL2, ICH_HCR_EN);
+
+ return 0;
+}
+
+static int gic_send_sgi(struct sgi *sgi)
+{
+ u64 val;
+ u16 targets = sgi->targets;
+
+ if (!is_sgi(sgi->id))
+ return -EINVAL;
+
+ if (sgi->routing_mode == 2)
+ targets = 1 << phys_processor_id();
+
+ val = (u64)sgi->aff3 << ICC_SGIR_AFF3_SHIFT
+ | (u64)sgi->aff2 << ICC_SGIR_AFF2_SHIFT
+ | sgi->aff1 << ICC_SGIR_AFF1_SHIFT
+ | (targets & ICC_SGIR_TARGET_MASK)
+ | (sgi->id & 0xf) << ICC_SGIR_IRQN_SHIFT;
+
+ if (sgi->routing_mode == 1)
+ val |= ICC_SGIR_ROUTING_BIT;
+
+ /*
+ * Ensure the targets see our modifications to their per-cpu
+ * structures.
+ */
+ dsb(ish);
+
+ arm_write_sysreg(ICC_SGI1R_EL1, val);
+ isb();
+
+ return 0;
+}
+
+static void gic_handle_irq(struct per_cpu *cpu_data)
+{
+}
+
+struct irqchip_ops gic_irqchip = {
+ .init = gic_init,
+ .cpu_init = gic_cpu_init,
+ .send_sgi = gic_send_sgi,
+ .handle_irq = gic_handle_irq,
+};
diff --git a/hypervisor/arch/arm/include/asm/gic_common.h b/hypervisor/arch/arm/include/asm/gic_common.h
new file mode 100644
index 0000000..d2ff6ac
--- /dev/null
+++ b/hypervisor/arch/arm/include/asm/gic_common.h
@@ -0,0 +1,43 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ * Authors:
+ * Jean-Philippe Brucker <jean-***@arm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#ifndef _JAILHOUSE_ASM_GIC_COMMON_H
+#define _JAILHOUSE_ASM_GIC_COMMON_H
+
+#include <asm/types.h>
+
+#define GICD_CTLR 0x0000
+#define GICD_TYPER 0x0004
+#define GICD_IIDR 0x0008
+#define GICD_IGROUPR 0x0080
+#define GICD_ISENABLER 0x0100
+#define GICD_ICENABLER 0x0180
+#define GICD_ISPENDR 0x0200
+#define GICD_ICPENDR 0x0280
+#define GICD_ISACTIVER 0x0300
+#define GICD_ICACTIVER 0x0380
+#define GICD_IPRIORITYR 0x0400
+#define GICD_ITARGETSR 0x0800
+#define GICD_ICFGR 0x0c00
+#define GICD_NSACR 0x0e00
+#define GICD_SGIR 0x0f00
+#define GICD_CPENDSGIR 0x0f10
+#define GICD_SPENDSGIR 0x0f20
+#define GICD_IROUTER 0x6000
+
+#define GICD_PIDR2_ARCH(pidr) (((pidr) & 0xf0) >> 4)
+
+#define is_sgi(irqn) ((u32)(irqn) < 16)
+#define is_ppi(irqn) ((irqn) > 15 && (irqn) < 32)
+#define is_spi(irqn) ((irqn) > 31 && (irqn) < 1020)
+
+#endif /* !_JAILHOUSE_ASM_GIC_COMMON_H */
diff --git a/hypervisor/arch/arm/include/asm/gic_v3.h b/hypervisor/arch/arm/include/asm/gic_v3.h
new file mode 100644
index 0000000..6768e7b
--- /dev/null
+++ b/hypervisor/arch/arm/include/asm/gic_v3.h
@@ -0,0 +1,248 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ * Authors:
+ * Jean-Philippe Brucker <jean-***@arm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#ifndef _JAILHOUSE_ASM_GIC_V3_H
+#define _JAILHOUSE_ASM_GIC_V3_H
+
+#include <asm/sysregs.h>
+
+#define GICD_CIDR0 0xfff0
+#define GICD_CIDR1 0xfff4
+#define GICD_CIDR2 0xfff8
+#define GICD_CIDR3 0xfffc
+
+#define GICD_PIDR0 0xffe0
+#define GICD_PIDR1 0xffe4
+#define GICD_PIDR2 0xffe8
+#define GICD_PIDR3 0xffec
+#define GICD_PIDR4 0xffd0
+#define GICD_PIDR5 0xffd4
+#define GICD_PIDR6 0xffd8
+#define GICD_PIDR7 0xffdc
+
+#define GICR_CTLR GICD_CTLR
+#define GICR_TYPER 0x0008
+#define GICR_WAKER 0x0014
+#define GICR_CIDR0 GICD_CIDR0
+#define GICR_CIDR1 GICD_CIDR1
+#define GICR_CIDR2 GICD_CIDR2
+#define GICR_CIDR3 GICD_CIDR3
+#define GICR_PIDR0 GICD_PIDR0
+#define GICR_PIDR1 GICD_PIDR1
+#define GICR_PIDR2 GICD_PIDR2
+#define GICR_PIDR3 GICD_PIDR3
+#define GICR_PIDR4 GICD_PIDR4
+#define GICR_PIDR5 GICD_PIDR5
+#define GICR_PIDR6 GICD_PIDR6
+#define GICR_PIDR7 GICD_PIDR7
+
+#define GICR_SGI_BASE 0x10000
+#define GICR_IGROUPR GICD_IGROUPR
+#define GICR_ISENABLER GICD_ISENABLER
+#define GICR_ICENABLER GICD_ICENABLER
+#define GICR_ISACTIVER GICD_ISACTIVER
+#define GICR_ICACTIVER GICD_ICACTIVER
+#define GICR_IPRIORITY GICD_IPRIORITY
+
+#define GICR_TYPER_Last (1 << 4)
+#define GICR_PIDR2_ARCH GICD_PIDR2_ARCH
+
+#define ICC_IAR1_EL1 SYSREG_32(0, c12, c12, 0)
+#define ICC_EOIR1_EL1 SYSREG_32(0, c12, c12, 1)
+#define ICC_HPPIR1_EL1 SYSREG_32(0, c12, c12, 2)
+#define ICC_BPR1_EL1 SYSREG_32(0, c12, c12, 3)
+#define ICC_DIR_EL1 SYSREG_32(0, c12, c11, 1)
+#define ICC_PMR_EL1 SYSREG_32(0, c4, c6, 0)
+#define ICC_RPR_EL1 SYSREG_32(0, c12, c11, 3)
+#define ICC_CTLR_EL1 SYSREG_32(0, c12, c12, 4)
+#define ICC_SRE_EL1 SYSREG_32(0, c12, c12, 5)
+#define ICC_SRE_EL2 SYSREG_32(4, c12, c9, 5)
+#define ICC_IGRPEN1_EL1 SYSREG_32(0, c12, c12, 7)
+#define ICC_SGI1R_EL1 SYSREG_64(0, c12)
+
+#define ICH_HCR_EL2 SYSREG_32(4, c12, c11, 0)
+#define ICH_VTR_EL2 SYSREG_32(4, c12, c11, 1)
+#define ICH_MISR_EL2 SYSREG_32(4, c12, c11, 2)
+#define ICH_EISR_EL2 SYSREG_32(4, c12, c11, 3)
+#define ICH_ELSR_EL2 SYSREG_32(4, c12, c11, 5)
+#define ICH_VMCR_EL2 SYSREG_32(4, c12, c11, 7)
+
+/* Different on AArch32 and AArch64... */
+#define __ICH_LR0(x) SYSREG_32(4, c12, c12, x)
+#define __ICH_LR8(x) SYSREG_32(4, c12, c13, x)
+#define __ICH_LRC0(x) SYSREG_32(4, c12, c14, x)
+#define __ICH_LRC8(x) SYSREG_32(4, c12, c15, x)
+
+#define ICH_LR0 __ICH_LR0(0)
+#define ICH_LR1 __ICH_LR0(1)
+#define ICH_LR2 __ICH_LR0(2)
+#define ICH_LR3 __ICH_LR0(3)
+#define ICH_LR4 __ICH_LR0(4)
+#define ICH_LR5 __ICH_LR0(5)
+#define ICH_LR6 __ICH_LR0(6)
+#define ICH_LR7 __ICH_LR0(7)
+#define ICH_LR8 __ICH_LR8(0)
+#define ICH_LR9 __ICH_LR8(1)
+#define ICH_LR10 __ICH_LR8(2)
+#define ICH_LR11 __ICH_LR8(3)
+#define ICH_LR12 __ICH_LR8(4)
+#define ICH_LR13 __ICH_LR8(5)
+#define ICH_LR14 __ICH_LR8(6)
+#define ICH_LR15 __ICH_LR8(7)
+#define ICH_LRC0 __ICH_LRC0(0)
+#define ICH_LRC1 __ICH_LRC0(1)
+#define ICH_LRC2 __ICH_LRC0(2)
+#define ICH_LRC3 __ICH_LRC0(3)
+#define ICH_LRC4 __ICH_LRC0(4)
+#define ICH_LRC5 __ICH_LRC0(5)
+#define ICH_LRC6 __ICH_LRC0(6)
+#define ICH_LRC7 __ICH_LRC0(7)
+#define ICH_LRC8 __ICH_LRC8(0)
+#define ICH_LRC9 __ICH_LRC8(1)
+#define ICH_LRC10 __ICH_LRC8(2)
+#define ICH_LRC11 __ICH_LRC8(3)
+#define ICH_LRC12 __ICH_LRC8(4)
+#define ICH_LRC13 __ICH_LRC8(5)
+#define ICH_LRC14 __ICH_LRC8(6)
+#define ICH_LRC15 __ICH_LRC8(7)
+
+#define ICC_CTLR_EOImode 0x2
+#define ICC_PMR_MASK 0xff
+#define ICC_PMR_DEFAULT 0xf0
+#define ICC_IGRPEN1_EN 0x1
+
+#define ICC_SGIR_AFF3_SHIFT 48
+#define ICC_SGIR_AFF2_SHIFT 32
+#define ICC_SGIR_AFF1_SHIFT 16
+#define ICC_SGIR_TARGET_MASK 0xffff
+#define ICC_SGIR_IRQN_SHIFT 24
+#define ICC_SGIR_ROUTING_BIT (1ULL << 40)
+
+#define ICH_HCR_EN (1 << 0)
+#define ICH_HCR_UIE (1 << 1)
+#define ICH_HCR_LRENPIE (1 << 2)
+#define ICH_HCR_NPIE (1 << 3)
+#define ICH_HCR_VGRP0EIE (1 << 4)
+#define ICH_HCR_VGRP0DIE (1 << 5)
+#define ICH_HCR_VGRP1EIE (1 << 6)
+#define ICH_HCR_VGRP1DIE (1 << 7)
+#define ICH_HCR_VARE (1 << 9)
+#define ICH_HCR_TC (1 << 10)
+#define ICH_HCR_TALL0 (1 << 11)
+#define ICH_HCR_TALL1 (1 << 12)
+#define ICH_HCR_TSEI (1 << 13)
+#define ICH_HCR_EOICount (0x1f << 27)
+
+#define ICH_MISR_EOI (1 << 0)
+#define ICH_MISR_U (1 << 1)
+#define ICH_MISR_LRENP (1 << 2)
+#define ICH_MISR_NP (1 << 3)
+#define ICH_MISR_VGRP0E (1 << 4)
+#define ICH_MISR_VGRP0D (1 << 5)
+#define ICH_MISR_VGRP1E (1 << 6)
+#define ICH_MISR_VGRP1D (1 << 7)
+
+#define ICH_VMCR_VENG0 (1 << 0)
+#define ICH_VMCR_VENG1 (1 << 1)
+#define ICH_VMCR_VACKCTL (1 << 2)
+#define ICH_VMCR_VFIQEN (1 << 3)
+#define ICH_VMCR_VCBPR (1 << 4)
+#define ICH_VMCR_VEOIM (1 << 9)
+#define ICH_VMCR_VBPR1_SHIFT 18
+#define ICH_VMCR_VBPR0_SHIFT 21
+#define ICH_VMCR_VPMR_SHIFT 24
+
+/* List registers upper bits */
+#define ICH_LR_INVALID (0x0ULL << 62)
+#define ICH_LR_PENDING (0x1ULL << 62)
+#define ICH_LR_ACTIVE (0x2ULL << 62)
+#define ICH_LR_PENDACTIVE (0x3ULL << 62)
+#define ICH_LR_HW_BIT (0x1ULL << 61)
+#define ICH_LR_GROUP_BIT (0x1ULL << 60)
+#define ICH_LR_PRIORITY_SHIFT 48
+#define ICH_LR_SGI_EOI (0x1ULL << 41)
+#define ICH_LR_PHYS_ID_SHIFT 32
+
+#ifndef __ASSEMBLY__
+
+#include <asm/types.h>
+
+static inline u64 gic_read_lr(unsigned int n)
+{
+ u32 lr, lrc;
+
+ switch (n) {
+#define __READ_LR(n) \
+ case n: \
+ arm_read_sysreg(ICH_LR##n, lr); \
+ arm_read_sysreg(ICH_LRC##n, lrc); \
+ break;
+
+ __READ_LR(0)
+ __READ_LR(1)
+ __READ_LR(2)
+ __READ_LR(3)
+ __READ_LR(4)
+ __READ_LR(5)
+ __READ_LR(6)
+ __READ_LR(7)
+ __READ_LR(8)
+ __READ_LR(9)
+ __READ_LR(10)
+ __READ_LR(11)
+ __READ_LR(12)
+ __READ_LR(13)
+ __READ_LR(14)
+ __READ_LR(15)
+#undef __READ_LR
+
+ default:
+ return (u64)(-1);
+ }
+
+ return (u64)lrc << 32 | lr;
+}
+
+static inline void gic_write_lr(unsigned int n, u64 val)
+{
+ u32 lr = (u32)val;
+ u32 lrc = val >> 32;
+
+ switch (n) {
+#define __WRITE_LR(n) \
+ case n: \
+ arm_write_sysreg(ICH_LR##n, lr); \
+ arm_write_sysreg(ICH_LRC##n, lrc); \
+ break;
+
+ __WRITE_LR(0)
+ __WRITE_LR(1)
+ __WRITE_LR(2)
+ __WRITE_LR(3)
+ __WRITE_LR(4)
+ __WRITE_LR(5)
+ __WRITE_LR(6)
+ __WRITE_LR(7)
+ __WRITE_LR(8)
+ __WRITE_LR(9)
+ __WRITE_LR(10)
+ __WRITE_LR(11)
+ __WRITE_LR(12)
+ __WRITE_LR(13)
+ __WRITE_LR(14)
+ __WRITE_LR(15)
+#undef __WRITE_LR
+ }
+}
+
+#endif /* __ASSEMBLY__ */
+#endif /* _JAILHOUSE_ASM_GIC_V3_H */
diff --git a/hypervisor/arch/arm/include/asm/io.h b/hypervisor/arch/arm/include/asm/io.h
index 10705f5..21f085b 100644
--- a/hypervisor/arch/arm/include/asm/io.h
+++ b/hypervisor/arch/arm/include/asm/io.h
@@ -15,6 +15,9 @@

#include <asm/types.h>

+/* AMBA's biosfood */
+#define AMBA_DEVICE 0xb105f00d
+
#ifndef __ASSEMBLY__

static inline void writeb_relaxed(u8 val, volatile void *addr)
diff --git a/hypervisor/arch/arm/include/asm/percpu.h b/hypervisor/arch/arm/include/asm/percpu.h
index b361116..e224254 100644
--- a/hypervisor/arch/arm/include/asm/percpu.h
+++ b/hypervisor/arch/arm/include/asm/percpu.h
@@ -36,7 +36,9 @@ struct per_cpu {
unsigned long linux_reg[NUM_ENTRY_REGS];

unsigned int cpu_id;
-// u32 apic_id;
+ /* Only GICv3: redistributor base */
+ void *gicr_base;
+
struct cell *cell;

u32 stats[JAILHOUSE_NUM_CPU_STATS];
diff --git a/hypervisor/arch/arm/include/asm/platform.h b/hypervisor/arch/arm/include/asm/platform.h
index f18dd83..a69d744 100644
--- a/hypervisor/arch/arm/include/asm/platform.h
+++ b/hypervisor/arch/arm/include/asm/platform.h
@@ -15,12 +15,26 @@

#include <linux/kconfig.h>

+/*
+ * All those things are defined in the device tree. This header *must*
+ * disappear. The GIC includes will need to be sanitized in order to avoid ID
+ * naming conflicts.
+ */
#ifndef __ASSEMBLY__

#ifdef CONFIG_ARCH_VEXPRESS

-#define UART_BASE_PHYS ((void *)0x1c090000)
-#define UART_BASE_VIRT ((void *)0xf8090000)
+# define UART_BASE_PHYS ((void *)0x1c090000)
+# define UART_BASE_VIRT ((void *)0xf8090000)
+
+# ifdef CONFIG_ARM_GIC_V3
+# define GICD_BASE ((void *)0x2f000000)
+# define GICD_SIZE 0x10000
+# define GICR_BASE ((void *)0x2f100000)
+# define GICR_SIZE 0x100000
+
+# include <asm/gic_v3.h>
+# endif /* GIC */

#endif /* CONFIG_ARCH_VEXPRESS */
#endif /* !__ASSEMBLY__ */
diff --git a/hypervisor/arch/arm/irqchip.c b/hypervisor/arch/arm/irqchip.c
index 67eef60..8fb4415 100644
--- a/hypervisor/arch/arm/irqchip.c
+++ b/hypervisor/arch/arm/irqchip.c
@@ -10,13 +10,20 @@
* the COPYING file in the top-level directory.
*/

+#include <asm/gic_common.h>
#include <asm/irqchip.h>
+#include <asm/io.h>
+#include <asm/platform.h>
+#include <asm/setup.h>
#include <asm/sysregs.h>
#include <jailhouse/entry.h>
#include <jailhouse/paging.h>
#include <jailhouse/printk.h>
#include <jailhouse/string.h>

+void *gicd_base;
+unsigned long gicd_size;
+
/*
* The init function must be called after the MMU setup, and whilst in the
* per-cpu setup, which means that a bool must be set by the master CPU
@@ -42,14 +49,54 @@ int irqchip_cpu_init(struct per_cpu *cpu_data)
return 0;
}

+/* Only the GIC is implemented */
+extern struct irqchip_ops gic_irqchip;
+
int irqchip_init(void)
{
+ int i, err;
+ u32 pidr2, cidr;
+ u32 dev_id = 0;
+
/* Only executed on master CPU */
if (irqchip_is_init)
return 0;

- memset(&irqchip, 0, sizeof(irqchip));
- irqchip_is_init = true;
+ /* FIXME: parse device tree */
+ gicd_base = GICD_BASE;
+ gicd_size = GICD_SIZE;
+
+ if ((err = arch_map_device(gicd_base, gicd_base, gicd_size)) != 0)
+ return err;
+
+ for (i = 3; i >= 0; i--) {
+ cidr = readl_relaxed(gicd_base + GICD_CIDR0 + i * 4);
+ dev_id |= cidr << i * 8;
+ }
+ if (dev_id != AMBA_DEVICE)
+ goto err_no_distributor;
+
+ /* Probe the GIC version */
+ pidr2 = readl_relaxed(gicd_base + GICD_PIDR2);
+ switch (GICD_PIDR2_ARCH(pidr2)) {
+ case 0x2:
+ break;
+ case 0x3:
+ case 0x4:
+ memcpy(&irqchip, &gic_irqchip, sizeof(struct irqchip_ops));
+ break;
+ }
+
+ if (irqchip.init) {
+ err = irqchip.init();
+ irqchip_is_init = true;
+
+ return err;
+ }
+
+err_no_distributor:
+ printk("GIC: no distributor found\n");
+ arch_unmap_device(gicd_base, gicd_size);

return -ENODEV;
}
--
1.7.9.5
--
You received this message because you are subscribed to the Google Groups "Jailhouse" group.
To unsubscribe from this group and stop receiving emails from it, send an email to jailhouse-dev+***@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
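As a side note on the SGI path above: gic_send_sgi packs the affinity route, the
target list and the interrupt number into a single 64-bit ICC_SGI1R_EL1 value.
A minimal host-side sketch of that packing (the helper name and parameters are
hypothetical, the shifts are the ones from the patch's gic_v3.h):

```c
#include <assert.h>
#include <stdint.h>

/* Shift/mask values as defined in the patch's gic_v3.h */
#define ICC_SGIR_AFF3_SHIFT	48
#define ICC_SGIR_AFF2_SHIFT	32
#define ICC_SGIR_AFF1_SHIFT	16
#define ICC_SGIR_TARGET_MASK	0xffff
#define ICC_SGIR_IRQN_SHIFT	24
#define ICC_SGIR_ROUTING_BIT	(1ULL << 40)

/*
 * Hypothetical standalone helper mirroring gic_send_sgi: builds the value
 * written to ICC_SGI1R_EL1. 'to_all_but_self' corresponds to routing mode 1
 * in the patch (broadcast to all PEs except the sender).
 */
static uint64_t sgi1r_value(uint8_t aff3, uint8_t aff2, uint8_t aff1,
			    uint16_t targets, uint8_t id, int to_all_but_self)
{
	uint64_t val = (uint64_t)aff3 << ICC_SGIR_AFF3_SHIFT
		     | (uint64_t)aff2 << ICC_SGIR_AFF2_SHIFT
		     | (uint64_t)aff1 << ICC_SGIR_AFF1_SHIFT
		     | (targets & ICC_SGIR_TARGET_MASK)
		     | (uint64_t)(id & 0xf) << ICC_SGIR_IRQN_SHIFT;

	if (to_all_but_self)
		val |= ICC_SGIR_ROUTING_BIT;

	return val;
}
```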
Jean-Philippe Brucker
2014-08-08 12:02:56 UTC
Permalink
This patch enables the EL2 stage-1 MMU, after the core initialisation of
all the paging structures needed by the hypervisor.
The ARM backend also needs to map the required devices for MMIO. In order
to stay compatible with the Linux ioremaps (which is quite dodgy, cf.
ecfa8d1a), the UART is still accessed through high memory, but the GIC
is accessed at its real address.

Some temporary mappings allow the mmu setup code to run at its physical
address while enabling the translations. Given the current hypervisor
configuration, there shouldn't be any conflict with existing mappings.
Once the PE runs at EL2, the HTTBR is installed, setup_mmu jumps back to
the virtual addresses, and the identity mappings are deleted.

This patch attempts to make most of the process 64bit-compatible. Only a
small bit of assembly is needed, which calls phys2hvirt and hvirt2phys
to translate the lr and sp addresses. Since these functions consist of
simple additions, they are currently harmless. This code would blow up
if they needed to dereference some pointers one day, but it should be
safe for the time being.

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/Makefile | 1 +
hypervisor/arch/arm/include/asm/processor.h | 31 +++++
hypervisor/arch/arm/include/asm/sections.lds | 7 +
hypervisor/arch/arm/include/asm/setup.h | 5 +-
hypervisor/arch/arm/include/asm/setup_mmu.h | 20 +++
hypervisor/arch/arm/include/asm/sysregs.h | 37 +++++
hypervisor/arch/arm/mmu_hyp.c | 193 ++++++++++++++++++++++++++
hypervisor/arch/arm/setup.c | 4 +-
8 files changed, 294 insertions(+), 4 deletions(-)
create mode 100644 hypervisor/arch/arm/include/asm/sections.lds

diff --git a/hypervisor/Makefile b/hypervisor/Makefile
index 688d7f0..00fbf3d 100644
--- a/hypervisor/Makefile
+++ b/hypervisor/Makefile
@@ -23,6 +23,7 @@ endif

ifeq ($(SRCARCH),arm)
KBUILD_CFLAGS += -marm
+KBUILD_CPPFLAGS += -DARCH_LINK
endif

ifneq ($(wildcard $(src)/include/jailhouse/config.h),)
diff --git a/hypervisor/arch/arm/include/asm/processor.h b/hypervisor/arch/arm/include/asm/processor.h
index 61ff3f2..e33550f 100644
--- a/hypervisor/arch/arm/include/asm/processor.h
+++ b/hypervisor/arch/arm/include/asm/processor.h
@@ -13,6 +13,8 @@
#ifndef _JAILHOUSE_ASM_PROCESSOR_H
#define _JAILHOUSE_ASM_PROCESSOR_H

+#include <jailhouse/utils.h>
+
#define PSR_MODE_MASK 0xf
#define PSR_USR_MODE 0x0
#define PSR_FIQ_MODE 0x1
@@ -32,6 +34,35 @@

#define MPIDR_CPUID_MASK 0x00ffffff

+#define SCTLR_M_BIT (1 << 0)
+#define SCTLR_A_BIT (1 << 1)
+#define SCTLR_C_BIT (1 << 2)
+#define SCTLR_CP15B_BIT (1 << 5)
+#define SCTLR_ITD_BIT (1 << 7)
+#define SCTLR_SED_BIT (1 << 8)
+#define SCTLR_I_BIT (1 << 12)
+#define SCTLR_V_BIT (1 << 13)
+#define SCTLR_nTWI (1 << 16)
+#define SCTLR_nTWE (1 << 18)
+#define SCTLR_WXN_BIT (1 << 19)
+#define SCTLR_UWXN_BIT (1 << 20)
+#define SCTLR_FI_BIT (1 << 21)
+#define SCTLR_EE_BIT (1 << 25)
+#define SCTLR_TRE_BIT (1 << 28)
+#define SCTLR_AFE_BIT (1 << 29)
+#define SCTLR_TE_BIT (1 << 30)
+
+#define PAR_F_BIT 0x1
+#define PAR_FST_SHIFT 1
+#define PAR_FST_MASK 0x3f
+#define PAR_SHA_SHIFT 7
+#define PAR_SHA_MASK 0x3
+#define PAR_NS_BIT (0x1 << 9)
+#define PAR_LPAE_BIT (0x1 << 11)
+#define PAR_PA_MASK BIT_MASK(39, 12)
+#define PAR_ATTR_SHIFT 56
+#define PAR_ATTR_MASK 0xff
+
#ifndef __ASSEMBLY__

struct registers {
diff --git a/hypervisor/arch/arm/include/asm/sections.lds b/hypervisor/arch/arm/include/asm/sections.lds
new file mode 100644
index 0000000..0251e83
--- /dev/null
+++ b/hypervisor/arch/arm/include/asm/sections.lds
@@ -0,0 +1,7 @@
+
+. = ALIGN(0x1000);
+.trampoline : {
+ trampoline_start = .;
+ *(.trampoline)
+ trampoline_end = .;
+}
diff --git a/hypervisor/arch/arm/include/asm/setup.h b/hypervisor/arch/arm/include/asm/setup.h
index e73214c..ca9acaf 100644
--- a/hypervisor/arch/arm/include/asm/setup.h
+++ b/hypervisor/arch/arm/include/asm/setup.h
@@ -48,9 +48,8 @@ cpu_return_el1(struct per_cpu *cpu_data)
}

int switch_exception_level(struct per_cpu *cpu_data);
-inline int arch_map_device(unsigned long paddr, unsigned long vaddr,
- unsigned long size);
-inline int arch_unmap_device(unsigned long addr, unsigned long size);
+inline int arch_map_device(void *paddr, void *vaddr, unsigned long size);
+inline int arch_unmap_device(void *addr, unsigned long size);

#endif /* !__ASSEMBLY__ */
#endif /* !_JAILHOUSE_ASM_SETUP_H */
diff --git a/hypervisor/arch/arm/include/asm/setup_mmu.h b/hypervisor/arch/arm/include/asm/setup_mmu.h
index 758f516..7b6e8bb 100644
--- a/hypervisor/arch/arm/include/asm/setup_mmu.h
+++ b/hypervisor/arch/arm/include/asm/setup_mmu.h
@@ -54,5 +54,25 @@ cpu_switch_el2(unsigned long phys_bootstrap, virt2phys_t virt2phys)
: "cc", "memory", "r0", "r1", "r2", "r3");
}

+static inline void __attribute__((always_inline))
+cpu_switch_phys2virt(phys2virt_t phys2virt)
+{
+ /* phys2virt is allowed to touch the stack */
+ asm volatile(
+ "mov r0, lr\n"
+ "blx %0\n"
+ /* Save virt_lr */
+ "push {r0}\n"
+ /* Translate phys_sp */
+ "mov r0, sp\n"
+ "blx %0\n"
+ /* Jump back to virtual addresses */
+ "mov sp, r0\n"
+ "pop {pc}\n"
+ :
+ : "r" (phys2virt)
+ : "cc", "r0", "r1", "r2", "r3", "lr", "sp");
+}
+
#endif /* !__ASSEMBLY__ */
#endif /* _JAILHOUSE_ASM_SETUP_MMU_H */
diff --git a/hypervisor/arch/arm/include/asm/sysregs.h b/hypervisor/arch/arm/include/asm/sysregs.h
index b27375f..261d934 100644
--- a/hypervisor/arch/arm/include/asm/sysregs.h
+++ b/hypervisor/arch/arm/include/asm/sysregs.h
@@ -30,10 +30,47 @@
* (Use the AArch64 names to ease the compatibility work)
*/
#define MPIDR_EL1 SYSREG_32(0, c0, c0, 5)
+#define SCTLR_EL2 SYSREG_32(4, c1, c0, 0)
#define TPIDR_EL2 SYSREG_32(4, c13, c0, 2)
+#define TTBR0_EL2 SYSREG_64(4, c2)
+#define TCR_EL2 SYSREG_32(4, c2, c0, 2)
+#define VTTBR_EL2 SYSREG_64(6, c2)
+#define VTCR_EL2 SYSREG_32(4, c2, c1, 2)

+#define PAR_EL1 SYSREG_64(0, c7)
+
+/* AArch32-specific registers */
+#define HMAIR0 SYSREG_32(4, c10, c2, 0)
+#define HMAIR1 SYSREG_32(4, c10, c2, 1)
#define HVBAR SYSREG_32(4, c12, c0, 0)

+#define ATS1HR SYSREG_32(4, c7, c8, 0)
+
+#define TLBIALL SYSREG_32(0, c8, c7, 0)
+#define TLBIALLIS SYSREG_32(0, c8, c3, 0)
+#define TLBIASID SYSREG_32(0, c8, c7, 2)
+#define TLBIASIDIS SYSREG_32(0, c8, c3, 2)
+#define TLBIMVA SYSREG_32(0, c8, c7, 1)
+#define TLBIMVAIS SYSREG_32(0, c8, c3, 1)
+#define TLBIMVAL SYSREG_32(0, c8, c7, 5)
+#define TLBIMVALIS SYSREG_32(0, c8, c3, 5)
+#define TLBIMVAA SYSREG_32(0, c8, c7, 3)
+#define TLBIMVAAIS SYSREG_32(0, c8, c3, 3)
+#define TLBIMVAAL SYSREG_32(0, c8, c7, 7)
+#define TLBIMVAALIS SYSREG_32(0, c8, c3, 7)
+#define TLBIALLH SYSREG_32(4, c8, c7, 0)
+#define TLBIALLHIS SYSREG_32(4, c8, c3, 0)
+#define TLBIALLNSNH SYSREG_32(4, c8, c7, 4)
+#define TLBIALLNSNHIS SYSREG_32(4, c8, c3, 4)
+#define TLBIMVAH SYSREG_32(4, c8, c7, 1)
+#define TLBIMVAHIS SYSREG_32(4, c8, c3, 1)
+#define TLBIMVALH SYSREG_32(4, c8, c7, 5)
+#define TLBIMVALHIS SYSREG_32(4, c8, c3, 5)
+#define TLBIIPAS2 SYSREG_32(4, c8, c4, 1)
+#define TLBIIPAS2IS SYSREG_32(4, c8, c0, 1)
+#define TLBIIPAS2L SYSREG_32(4, c8, c5, 5)
+#define TLBIIPAS2LIS SYSREG_32(4, c8, c0, 5)
+
#define SYSREG_32(...) 32, __VA_ARGS__
#define SYSREG_64(...) 64, __VA_ARGS__

diff --git a/hypervisor/arch/arm/mmu_hyp.c b/hypervisor/arch/arm/mmu_hyp.c
index c756576..fcfae05 100644
--- a/hypervisor/arch/arm/mmu_hyp.c
+++ b/hypervisor/arch/arm/mmu_hyp.c
@@ -14,11 +14,154 @@
#include <asm/setup_mmu.h>
#include <asm/sysregs.h>
#include <jailhouse/paging.h>
+#include <jailhouse/printk.h>
+
+/*
+ * Two identity mappings need to be created for enabling the MMU: one for the
+ * code and one for the stack.
+ * There should not currently be any conflict with the existing mappings, but we
+ * still make sure not to overwrite anything by using the 'conflict' flag.
+ */
+static struct {
+ unsigned long addr;
+ unsigned long flags;
+ bool conflict;
+} id_maps[2];
+
+extern unsigned long trampoline_start, trampoline_end;
+
+static int set_id_map(int i, unsigned long address, unsigned long size)
+{
+ if (i >= ARRAY_SIZE(id_maps))
+ return -ENOMEM;
+
+ /* The trampoline code should be contained in one page. */
+ if ((address & PAGE_MASK) != ((address + size - 1) & PAGE_MASK)) {
+ printk("FATAL: Unable to IDmap more than one page at a time.\n");
+ return -E2BIG;
+ }
+
+ id_maps[i].addr = address;
+ id_maps[i].conflict = false;
+ id_maps[i].flags = PAGE_DEFAULT_FLAGS;
+
+ return 0;
+}
+
+static void create_id_maps(void)
+{
+ unsigned long i;
+ bool conflict;
+
+ for (i = 0; i < ARRAY_SIZE(id_maps); i++) {
+ conflict = (page_map_virt2phys(&hv_paging_structs,
+ id_maps[i].addr) != INVALID_PHYS_ADDR);
+ if (conflict) {
+ /*
+ * TODO: Get the flags, and update them if they are
+ * insufficient. Save the current flags in id_maps.
+ * This extraction should be implemented in the core.
+ */
+ } else {
+ page_map_create(&hv_paging_structs, id_maps[i].addr,
+ PAGE_SIZE, id_maps[i].addr, id_maps[i].flags,
+ PAGE_MAP_NON_COHERENT);
+ }
+ id_maps[i].conflict = conflict;
+ }
+}
+
+static void destroy_id_maps(void)
+{
+ unsigned long i;
+
+ for (i = 0; i < ARRAY_SIZE(id_maps); i++) {
+ if (id_maps[i].conflict) {
+ /* TODO: Switch back to the original flags */
+ } else {
+ page_map_destroy(&hv_paging_structs, id_maps[i].addr,
+ PAGE_SIZE, PAGE_MAP_NON_COHERENT);
+ }
+ }
+}
+
+/*
+ * This code is put in the id-mapped `.trampoline' section, allowing the MMU
+ * to be enabled and disabled in a readable and portable fashion.
+ * This process makes the following function quite fragile: cpu_switch_phys2virt
+ * attempts to translate LR and SP using a call to the virtual address of
+ * phys2virt.
+ * Those two registers are thus expected to be left intact by the whole MMU
+ * setup. The stack is nonetheless usable, since it is id-mapped as well.
+ */
+static void __attribute__((naked)) __attribute__((section(".trampoline")))
+setup_mmu_el2(struct per_cpu *cpu_data, phys2virt_t phys2virt, u64 ttbr)
+{
+ u32 tcr = T0SZ
+ | (TCR_RGN_WB_WA << TCR_IRGN0_SHIFT)
+ | (TCR_RGN_WB_WA << TCR_ORGN0_SHIFT)
+ | (TCR_INNER_SHAREABLE << TCR_SH0_SHIFT)
+ | HTCR_RES1;
+ u32 sctlr;
+
+ /* Ensure that MMU is disabled. */
+ arm_read_sysreg(SCTLR_EL2, sctlr);
+ if (sctlr & SCTLR_M_BIT)
+ return;
+
+ /*
+ * This setup code is always preceded by a complete cache flush, so
+ * there are already a few memory barriers between the page table writes
+ * and here.
+ */
+ isb();
+ arm_write_sysreg(HMAIR0, DEFAULT_HMAIR0);
+ arm_write_sysreg(HMAIR1, DEFAULT_HMAIR1);
+ arm_write_sysreg(TTBR0_EL2, ttbr);
+ arm_write_sysreg(TCR_EL2, tcr);
+
+ /* Flush TLB */
+ arm_write_sysreg(TLBIALLH, 1);
+ dsb(nsh);
+
+ /* Enable stage-1 translation */
+ arm_read_sysreg(SCTLR_EL2, sctlr);
+ sctlr |= SCTLR_M_BIT;
+ arm_write_sysreg(SCTLR_EL2, sctlr);
+ isb();
+
+ /*
+ * Inlined epilogue that returns to switch_exception_level.
+ * Must not touch anything else than the stack
+ */
+ cpu_switch_phys2virt(phys2virt);
+
+ /* Not reached (cannot be a while(1), it confuses the compiler) */
+ asm volatile("b .\n");
+}
+
+static void check_mmu_map(unsigned long virt_addr, unsigned long phys_addr)
+{
+ unsigned long phys_base;
+ u64 par;
+
+ arm_write_sysreg(ATS1HR, virt_addr);
+ isb();
+ arm_read_sysreg(PAR_EL1, par);
+ phys_base = (unsigned long)(par & PAR_PA_MASK);
+ if ((par & PAR_F_BIT) || (phys_base != phys_addr)) {
+ printk("VA->PA check failed, expected %x, got %x\n",
+ phys_addr, phys_base);
+ while (1);
+ }
+}

/*
* Jumping to EL2 in the same C code represents an interesting challenge, since
* it will switch from virtual addresses to physical ones, and then back to
* virtual after setting up the EL2 MMU.
+ * To this end, the setup_mmu and cpu_switch_el2 functions are naked and must
+ * handle the stack themselves.
*/
int switch_exception_level(struct per_cpu *cpu_data)
{
@@ -29,6 +172,34 @@ int switch_exception_level(struct per_cpu *cpu_data)
phys2virt_t phys2virt = page_map_phys2hvirt;
virt2phys_t virt2phys = page_map_hvirt2phys;
unsigned long phys_bootstrap = virt2phys(&bootstrap_vectors);
+ struct per_cpu *phys_cpu_data = (struct per_cpu *)virt2phys(cpu_data);
+ unsigned long trampoline_phys = virt2phys((void *)&trampoline_start);
+ unsigned long trampoline_size = &trampoline_end - &trampoline_start;
+ unsigned long stack_virt = (unsigned long)cpu_data->stack;
+ unsigned long stack_phys = virt2phys((void *)stack_virt);
+ u64 ttbr_el2;
+
+ /* Check the paging structures as well as the MMU initialisation */
+ unsigned long jailhouse_base_phys = page_map_virt2phys(&hv_paging_structs,
+ JAILHOUSE_BASE);
+
+ /*
+ * The paging structures won't be easily accessible when initialising EL2;
+ * only the per-cpu data will be readable at its physical address
+ */
+ ttbr_el2 = (u64)virt2phys(hv_paging_structs.root_table) & TTBR_MASK;
+
+ /*
+ * Mirror the mmu setup code, so that we are able to jump to the virtual
+ * address after enabling it.
+ * Those regions must fit on one page.
+ */
+
+ if (set_id_map(0, trampoline_phys, trampoline_size) != 0)
+ return -E2BIG;
+ if (set_id_map(1, stack_phys, PAGE_SIZE) != 0)
+ return -E2BIG;
+ create_id_maps();

cpu_switch_el2(phys_bootstrap, virt2phys);
/*
@@ -37,8 +208,30 @@ int switch_exception_level(struct per_cpu *cpu_data)
* addresses before returning, or else we are pretty much doomed.
*/

+ setup_mmu_el2(phys_cpu_data, phys2virt, ttbr_el2);
+
+ /* Sanity check */
+ check_mmu_map(JAILHOUSE_BASE, jailhouse_base_phys);
+
/* Set the new vectors once we're back to a sane, virtual state */
arm_write_sysreg(HVBAR, &hyp_vectors);

+ /* Remove the identity mapping */
+ destroy_id_maps();
+
return 0;
}
+
+int arch_map_device(void *paddr, void *vaddr, unsigned long size)
+{
+ return page_map_create(&hv_paging_structs, (unsigned long)paddr, size,
+ (unsigned long)vaddr,
+ PAGE_DEFAULT_FLAGS | S1_PTE_FLAG_DEVICE,
+ PAGE_MAP_NON_COHERENT);
+}
+
+int arch_unmap_device(void *vaddr, unsigned long size)
+{
+ return page_map_destroy(&hv_paging_structs, (unsigned long)vaddr, size,
+ PAGE_MAP_NON_COHERENT);
+}
diff --git a/hypervisor/arch/arm/setup.c b/hypervisor/arch/arm/setup.c
index 43ef1eb..f895b16 100644
--- a/hypervisor/arch/arm/setup.c
+++ b/hypervisor/arch/arm/setup.c
@@ -10,6 +10,8 @@
* the COPYING file in the top-level directory.
*/

+#include <asm/percpu.h>
+#include <asm/platform.h>
#include <asm/setup.h>
#include <asm/sysregs.h>
#include <jailhouse/entry.h>
@@ -18,7 +20,7 @@

int arch_init_early(void)
{
- return -ENOSYS;
+ return arch_map_device(UART_BASE_PHYS, UART_BASE_VIRT, PAGE_SIZE);
}

int arch_cpu_init(struct per_cpu *cpu_data)
--
1.7.9.5
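Regarding the "simple additions" mentioned in the commit message: the whole
trampoline scheme only works because phys2virt/virt2phys apply a fixed offset
and never dereference memory, so they can be called from either address space
while the MMU state is in flux. A rough sketch of that property (the base
addresses below are purely illustrative; the real values come from the
Jailhouse configuration):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative constants only -- not the real load addresses. */
#define HV_VIRT_BASE	0xf0000000UL
#define HV_PHYS_BASE	0x7c000000UL

/*
 * Fixed-offset translation: pure arithmetic, no pointer dereference, so it
 * is safe to call through either a physical or a virtual code address.
 */
static unsigned long hv_virt2phys(unsigned long vaddr)
{
	return vaddr - HV_VIRT_BASE + HV_PHYS_BASE;
}

static unsigned long hv_phys2virt(unsigned long paddr)
{
	return paddr - HV_PHYS_BASE + HV_VIRT_BASE;
}
```

If these helpers ever needed to walk a table instead of adding an offset, the
LR/SP fixup in cpu_switch_phys2virt would indeed break, as the commit message
warns.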
Jean-Philippe Brucker
2014-08-08 12:02:55 UTC
Permalink
This patch adds an optional include inside the hypervisor's linker
script in order to add sections specific to the architecture.
For instance on ARM, a trampoline section will need to be added to
safely enable and disable the EL2 MMU. To avoid overlapping
complications, it will be less than one page, and aligned.

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/hypervisor.lds.S | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/hypervisor/hypervisor.lds.S b/hypervisor/hypervisor.lds.S
index 99a25e3..e1a57ce 100644
--- a/hypervisor/hypervisor.lds.S
+++ b/hypervisor/hypervisor.lds.S
@@ -22,6 +22,10 @@ SECTIONS
__text_start = .;
.text : { *(.text) }

+#ifdef ARCH_LINK
+#include <asm/sections.lds>
+#endif
+
. = ALIGN(16);
.rodata : { *(.rodata) }
--
1.7.9.5
Jan Kiszka
2014-08-09 07:29:12 UTC
Permalink
Post by Jean-Philippe Brucker
This patch adds an optional include inside the hypervisor's linker
script in order to add sections specific to the architecture.
For instance on ARM, a trampoline section will need to be added to
safely enable and disable the EL2 MMU. To avoid overlapping
complications, it will be less than one page, and aligned.
---
hypervisor/hypervisor.lds.S | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/hypervisor/hypervisor.lds.S b/hypervisor/hypervisor.lds.S
index 99a25e3..e1a57ce 100644
--- a/hypervisor/hypervisor.lds.S
+++ b/hypervisor/hypervisor.lds.S
@@ -22,6 +22,10 @@ SECTIONS
__text_start = .;
.text : { *(.text) }
+#ifdef ARCH_LINK
+#include <asm/sections.lds>
+#endif
Let's have asm/sections.h for everyone, and there we define something
like ARCH_SECTIONS (empty on x86) that can then be used here
unconditionally. I'm trying to keep #ifdef usage low.

BTW, I'm still surprised that you were able to leave the core
practically untouched.

Jan
Jean-Philippe Brucker
2014-08-09 16:54:40 UTC
Post by Jan Kiszka
Post by Jean-Philippe Brucker
This patch adds an optional include inside the hypervisor's linker
script in order to add sections specific to the architecture.
For instance on ARM, a trampoline section will need to be added to
safely enable and disable the EL2 MMU. To avoid overlapping
complications, it will be less than one page, and aligned.
---
hypervisor/hypervisor.lds.S | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/hypervisor/hypervisor.lds.S b/hypervisor/hypervisor.lds.S
index 99a25e3..e1a57ce 100644
--- a/hypervisor/hypervisor.lds.S
+++ b/hypervisor/hypervisor.lds.S
@@ -22,6 +22,10 @@ SECTIONS
__text_start = .;
.text : { *(.text) }
+#ifdef ARCH_LINK
+#include <asm/sections.lds>
+#endif
Let's have asm/sections.h for everyone, and there we define something
like ARCH_SECTIONS (empty on x86) that can then be used here
unconditionally. I'm trying to keep #ifdef usage low.
BTW, I'm still surprised that you were able to leave the core
practically untouched.
Apart from the cell config details I mentioned in the cover letter, most
of the arm port seems to fit well with the core, even though the
setup/disable process is quite different from the x86 side.

There are some FIXMEs in the paging code that would need modifications
in the core: typedefs for virtual/physical addresses and for the paging
flags, for instance, to enable >4GB address spaces on 32bit, and to use
two levels of stage-2 page tables instead of three.

I had a few memory leaks when destroying and re-creating cells. I solved
one by explicitly calling destroy_cpu_set in cell_destroy. (Since a u64
cpu set in a cell config cannot fit in an unsigned long, a whole page is
allocated, but never freed.)
I haven't had time to hunt down the other ones yet. I guess they are
somewhere in the arm port.

Thanks,
Jean-Philippe
Jean-Philippe Brucker
2014-08-08 12:03:01 UTC
Since the GIC uses MMIOs, its initialisation must be done at EL2. This
is why arch_cpu_init first calls irqchip_init on the master CPU, to map
the devices, and then irqchip_cpu_init on all CPUs.

The aim of this patch is to allow support for both GICv2 and GICv3. It
abstracts the GIC operations by using `struct irqchip_ops', and fills it
with the right device hooks after detecting which irqchip is available.

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/Makefile | 1 +
hypervisor/arch/arm/include/asm/irqchip.h | 52 +++++++++++++++++++++++++++
hypervisor/arch/arm/irqchip.c | 55 +++++++++++++++++++++++++++++
hypervisor/arch/arm/setup.c | 9 +++++
4 files changed, 117 insertions(+)
create mode 100644 hypervisor/arch/arm/include/asm/irqchip.h
create mode 100644 hypervisor/arch/arm/irqchip.c

diff --git a/hypervisor/arch/arm/Makefile b/hypervisor/arch/arm/Makefile
index 0016e15..3932f85 100644
--- a/hypervisor/arch/arm/Makefile
+++ b/hypervisor/arch/arm/Makefile
@@ -16,6 +16,7 @@ always := built-in.o

obj-y := entry.o dbg-write.o exception.o setup.o lib.o traps.o
obj-y += paging.o mmu_hyp.o mmu_cell.o
+obj-y += irqchip.o
obj-$(CONFIG_ARCH_VEXPRESS) += dbg-write-pl011.o

# Needed for kconfig
diff --git a/hypervisor/arch/arm/include/asm/irqchip.h b/hypervisor/arch/arm/include/asm/irqchip.h
new file mode 100644
index 0000000..0ef5fe0
--- /dev/null
+++ b/hypervisor/arch/arm/include/asm/irqchip.h
@@ -0,0 +1,52 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ * Authors:
+ * Jean-Philippe Brucker <jean-***@arm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#ifndef _JAILHOUSE_ASM_IRQCHIP_H
+#define _JAILHOUSE_ASM_IRQCHIP_H
+
+#include <asm/percpu.h>
+
+#ifndef __ASSEMBLY__
+
+struct sgi {
+ /*
+ * Routing mode values:
+ * 0: use aff3.aff2.aff1.targets
+ * 1: all processors in the cell except this CPU
+ * 2: only this CPU
+ */
+ u8 routing_mode;
+ /* GICv2 only uses 8bit in targets, and no affinity routing */
+ u8 aff1;
+ u8 aff2;
+ /* Only available on 64-bit, when CTLR.A3V is 1 */
+ u8 aff3;
+ u16 targets;
+ u16 id;
+};
+
+struct irqchip_ops {
+ int (*init)(void);
+ int (*cpu_init)(struct per_cpu *cpu_data);
+
+ int (*send_sgi)(struct sgi *sgi);
+ void (*handle_irq)(struct per_cpu *cpu_data);
+};
+
+int irqchip_init(void);
+int irqchip_cpu_init(struct per_cpu *cpu_data);
+
+int irqchip_send_sgi(struct sgi *sgi);
+void irqchip_handle_irq(struct per_cpu *cpu_data);
+
+#endif /* __ASSEMBLY__ */
+#endif /* _JAILHOUSE_ASM_IRQCHIP_H */
diff --git a/hypervisor/arch/arm/irqchip.c b/hypervisor/arch/arm/irqchip.c
new file mode 100644
index 0000000..67eef60
--- /dev/null
+++ b/hypervisor/arch/arm/irqchip.c
@@ -0,0 +1,55 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ * Authors:
+ * Jean-Philippe Brucker <jean-***@arm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <asm/irqchip.h>
+#include <asm/sysregs.h>
+#include <jailhouse/entry.h>
+#include <jailhouse/paging.h>
+#include <jailhouse/printk.h>
+#include <jailhouse/string.h>
+
+/*
+ * The init function must be called after the MMU setup, and whilst in the
+ * per-cpu setup, so a flag set by the master CPU ensures it runs only once.
+ */
+static bool irqchip_is_init;
+static struct irqchip_ops irqchip;
+
+void irqchip_handle_irq(struct per_cpu *cpu_data)
+{
+ irqchip.handle_irq(cpu_data);
+}
+
+int irqchip_send_sgi(struct sgi *sgi)
+{
+ return irqchip.send_sgi(sgi);
+}
+
+int irqchip_cpu_init(struct per_cpu *cpu_data)
+{
+ if (irqchip.cpu_init)
+ return irqchip.cpu_init(cpu_data);
+
+ return 0;
+}
+
+int irqchip_init(void)
+{
+ /* Only executed on master CPU */
+ if (irqchip_is_init)
+ return 0;
+
+ memset(&irqchip, 0, sizeof(irqchip));
+ irqchip_is_init = true;
+
+ return -ENODEV;
+}
diff --git a/hypervisor/arch/arm/setup.c b/hypervisor/arch/arm/setup.c
index ef18a7d..3afabbf 100644
--- a/hypervisor/arch/arm/setup.c
+++ b/hypervisor/arch/arm/setup.c
@@ -11,6 +11,7 @@
*/

#include <asm/control.h>
+#include <asm/irqchip.h>
#include <asm/percpu.h>
#include <asm/platform.h>
#include <asm/setup.h>
@@ -74,6 +75,14 @@ int arch_cpu_init(struct per_cpu *cpu_data)
arm_write_sysreg(HCR, hcr);

err = arch_mmu_cpu_cell_init(cpu_data);
+ if (err)
+ return err;
+
+ err = irqchip_init();
+ if (err)
+ return err;
+
+ err = irqchip_cpu_init(cpu_data);

return err;
}
--
1.7.9.5
Jean-Philippe Brucker
2014-08-08 12:03:15 UTC
This patch allows new guests to be entered with caches disabled. By
cleaning the data caches, it makes sure that the recently written guest
code and data are present in memory before returning to an environment
with only a stage-2 MMU.

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/control.c | 19 ++++++++++++++++++-
hypervisor/arch/arm/include/asm/cell.h | 4 ++++
hypervisor/arch/arm/include/asm/control.h | 2 ++
hypervisor/arch/arm/include/asm/processor.h | 7 +++++++
hypervisor/arch/arm/mmu_cell.c | 26 ++++++++++++++++++++++++++
5 files changed, 57 insertions(+), 1 deletion(-)

diff --git a/hypervisor/arch/arm/control.c b/hypervisor/arch/arm/control.c
index f8941a4..1988850 100644
--- a/hypervisor/arch/arm/control.c
+++ b/hypervisor/arch/arm/control.c
@@ -12,6 +12,7 @@

#include <asm/control.h>
#include <asm/irqchip.h>
+#include <asm/processor.h>
#include <asm/sysregs.h>
#include <asm/traps.h>
#include <jailhouse/control.h>
@@ -20,6 +21,8 @@

static void arch_reset_el1(struct registers *regs)
{
+ u32 sctlr;
+
/* Wipe all banked and usr regs */
memset(regs, 0, sizeof(struct registers));

@@ -49,7 +52,9 @@ static void arch_reset_el1(struct registers *regs)
arm_write_banked_reg(SPSR_fiq, 0);

/* Wipe the system registers */
- arm_write_sysreg(SCTLR_EL1, 0);
+ arm_read_sysreg(SCTLR_EL1, sctlr);
+ sctlr = sctlr & ~SCTLR_MASK;
+ arm_write_sysreg(SCTLR_EL1, sctlr);
arm_write_sysreg(ACTLR_EL1, 0);
arm_write_sysreg(CPACR_EL1, 0);
arm_write_sysreg(CONTEXTIDR_EL1, 0);
@@ -87,11 +92,19 @@ static void arch_reset_self(struct per_cpu *cpu_data)
{
int err;
unsigned long reset_address;
+ struct cell *cell = cpu_data->cell;
struct registers *regs = guest_regs(cpu_data);

err = arch_mmu_cpu_cell_init(cpu_data);
if (err)
printk("MMU setup failed\n");
+ /*
+ * On the first CPU to reach this, write all cell data to memory so it
+ * can be started with caches disabled.
+ * On all CPUs, invalidate the instruction caches to take into account
+ * the potential new instructions.
+ */
+ arch_cell_caches_flush(cell);

/*
* We come from the IRQ handler, but we won't return there, so the IPI
@@ -156,12 +169,16 @@ void arch_resume_cpu(unsigned int cpu_id)
/* CPU must be stopped */
void arch_park_cpu(unsigned int cpu_id)
{
+ struct per_cpu *cpu_data = per_cpu(cpu_id);
+
/*
* Reset always follows park_cpu, so we just need to make sure that the
* CPU is suspended
*/
if (psci_wait_cpu_stopped(cpu_id) != 0)
printk("ERROR: CPU%d is supposed to be stopped\n", cpu_id);
+ else
+ cpu_data->cell->arch.needs_flush = true;
}

/* CPU must be stopped */
diff --git a/hypervisor/arch/arm/include/asm/cell.h b/hypervisor/arch/arm/include/asm/cell.h
index 88fe125..8f65a96 100644
--- a/hypervisor/arch/arm/include/asm/cell.h
+++ b/hypervisor/arch/arm/include/asm/cell.h
@@ -13,6 +13,7 @@
#ifndef _JAILHOUSE_ASM_CELL_H
#define _JAILHOUSE_ASM_CELL_H

+#include <asm/spinlock.h>
#include <asm/types.h>

#ifndef __ASSEMBLY__
@@ -23,6 +24,9 @@

struct arch_cell {
struct paging_structures mm;
+
+ spinlock_t caches_lock;
+ bool needs_flush;
};

struct cell {
diff --git a/hypervisor/arch/arm/include/asm/control.h b/hypervisor/arch/arm/include/asm/control.h
index 2ada50d..592ee29 100644
--- a/hypervisor/arch/arm/include/asm/control.h
+++ b/hypervisor/arch/arm/include/asm/control.h
@@ -25,6 +25,8 @@
#ifndef __ASSEMBLY__

void arch_cpu_dcaches_flush(unsigned int action);
+void arch_cpu_icache_flush(void);
+void arch_cell_caches_flush(struct cell *cell);
int arch_mmu_cell_init(struct cell *cell);
void arch_mmu_cell_destroy(struct cell *cell);
int arch_mmu_cpu_cell_init(struct per_cpu *cpu_data);
diff --git a/hypervisor/arch/arm/include/asm/processor.h b/hypervisor/arch/arm/include/asm/processor.h
index 00ffcf0..9c1fe75 100644
--- a/hypervisor/arch/arm/include/asm/processor.h
+++ b/hypervisor/arch/arm/include/asm/processor.h
@@ -59,6 +59,13 @@
#define SCTLR_AFE_BIT (1 << 29)
#define SCTLR_TE_BIT (1 << 30)

+/* Bits to wipe on cell reset */
+#define SCTLR_MASK (SCTLR_M_BIT | SCTLR_A_BIT | SCTLR_C_BIT \
+ | SCTLR_I_BIT | SCTLR_V_BIT | SCTLR_WXN_BIT \
+ | SCTLR_UWXN_BIT | SCTLR_FI_BIT | SCTLR_EE_BIT \
+ | SCTLR_TRE_BIT | SCTLR_AFE_BIT | SCTLR_TE_BIT)
+
+
#define HCR_TRVM_BIT (1 << 30)
#define HCR_TVM_BIT (1 << 26)
#define HCR_HDC_BIT (1 << 29)
diff --git a/hypervisor/arch/arm/mmu_cell.c b/hypervisor/arch/arm/mmu_cell.c
index 968ca3a..e7e57f7 100644
--- a/hypervisor/arch/arm/mmu_cell.c
+++ b/hypervisor/arch/arm/mmu_cell.c
@@ -104,3 +104,29 @@ int arch_mmu_cpu_cell_init(struct per_cpu *cpu_data)

return 0;
}
+
+void arch_cell_caches_flush(struct cell *cell)
+{
+ /* Only the first CPU needs to clean the data caches */
+ spin_lock(&cell->arch.caches_lock);
+ if (cell->arch.needs_flush) {
+ /*
+ * Since there is no way to know which virtual addresses have been used
+ * by the root cell to write the new cell's data, a complete clean has
+ * to be performed.
+ */
+ arch_cpu_dcaches_flush(CACHES_CLEAN_INVALIDATE);
+ cell->arch.needs_flush = false;
+ }
+ spin_unlock(&cell->arch.caches_lock);
+
+ /*
+ * New instructions may have been written, so the I-cache needs to be
+ * invalidated even though the VMID is different.
+ * A complete invalidation is the only way to ensure all virtual aliases
+ * of these memory locations are invalidated, whatever the cache type.
+ */
+ arch_cpu_icache_flush();
+
+ /* ERET will ensure context synchronization */
+}
--
1.7.9.5
Jean-Philippe Brucker
2014-08-08 12:03:08 UTC
In GICv3, IPIs are sent by writing the system register `ICC_SGIR'.
This patch mediates those writes by injecting the IPIs into the
appropriate cells, and issues a hypervisor IPI to let the cell's CPUs
fill their list registers.

Since there shouldn't be many cases where Jailhouse needs to emulate
system register accesses, this patch keeps it simple, by calling the
GICv3 function directly from the trap handler, without abstracting it
through irqchip.
However, this change adds an ungraceful ifdef, since the GICv2 and v3
headers are mutually exclusive for the moment.
In GICv2, the SGIR register is 32bit and will be handled directly in the
gic-common.c code, using an MMIO trap of the distributor accesses.

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/control.c | 6 ++++-
hypervisor/arch/arm/gic-v3.c | 35 +++++++++++++++++++++++++++++
hypervisor/arch/arm/include/asm/control.h | 2 ++
hypervisor/arch/arm/include/asm/gic_v3.h | 3 +++
hypervisor/arch/arm/include/asm/traps.h | 1 +
hypervisor/arch/arm/traps.c | 33 +++++++++++++++++++++++++++
6 files changed, 79 insertions(+), 1 deletion(-)

diff --git a/hypervisor/arch/arm/control.c b/hypervisor/arch/arm/control.c
index 4a6011c..6cdb133 100644
--- a/hypervisor/arch/arm/control.c
+++ b/hypervisor/arch/arm/control.c
@@ -16,7 +16,11 @@

void arch_handle_sgi(struct per_cpu *cpu_data, u32 irqn)
{
-
+ switch (irqn) {
+ case SGI_INJECT:
+ irqchip_inject_pending(cpu_data);
+ break;
+ }
}

void arch_handle_exit(struct per_cpu *cpu_data, struct registers *regs)
diff --git a/hypervisor/arch/arm/gic-v3.c b/hypervisor/arch/arm/gic-v3.c
index 6c75561..d67e59c 100644
--- a/hypervisor/arch/arm/gic-v3.c
+++ b/hypervisor/arch/arm/gic-v3.c
@@ -17,6 +17,7 @@
#include <asm/platform.h>
#include <asm/setup.h>
#include <asm/control.h>
+#include <asm/traps.h>
#include <jailhouse/control.h>
#include <jailhouse/printk.h>
#include <jailhouse/processor.h>
@@ -149,6 +150,40 @@ static int gic_send_sgi(struct sgi *sgi)
return 0;
}

+int gicv3_handle_sgir_write(struct per_cpu *cpu_data, u64 sgir)
+{
+ struct sgi sgi;
+ struct cell *cell = cpu_data->cell;
+ unsigned int cpu;
+ unsigned long this_cpu = cpu_data->cpu_id;
+ unsigned long routing_mode = !!(sgir & ICC_SGIR_ROUTING_BIT);
+ unsigned long targets = sgir & ICC_SGIR_TARGET_MASK;
+ u32 irq = sgir >> ICC_SGIR_IRQN_SHIFT & 0xf;
+
+ /* FIXME: clusters are not supported yet. */
+ sgi.targets = 0;
+ sgi.routing_mode = routing_mode;
+ sgi.aff1 = sgir >> ICC_SGIR_AFF1_SHIFT & 0xff;
+ sgi.aff2 = sgir >> ICC_SGIR_AFF2_SHIFT & 0xff;
+ sgi.aff3 = sgir >> ICC_SGIR_AFF3_SHIFT & 0xff;
+ sgi.id = SGI_INJECT;
+
+ for_each_cpu_except(cpu, cell->cpu_set, this_cpu) {
+ if (routing_mode == 0 && !test_bit(cpu, &targets))
+ continue;
+ else if (routing_mode == 1 && cpu == this_cpu)
+ continue;
+
+ irqchip_set_pending(per_cpu(cpu), irq, false);
+ sgi.targets |= (1 << cpu);
+ }
+
+ /* Let the other CPUs inject their SGIs */
+ gic_send_sgi(&sgi);
+
+ return TRAP_HANDLED;
+}
+
/*
* Handle the maintenance interrupt, the rest is injected into the cell.
* Return true when the IRQ has been handled by the hyp.
diff --git a/hypervisor/arch/arm/include/asm/control.h b/hypervisor/arch/arm/include/asm/control.h
index 1e90148..ed571a2 100644
--- a/hypervisor/arch/arm/include/asm/control.h
+++ b/hypervisor/arch/arm/include/asm/control.h
@@ -16,6 +16,8 @@
#include <asm/cell.h>
#include <asm/percpu.h>

+#define SGI_INJECT 0
+
#ifndef __ASSEMBLY__

int arch_mmu_cell_init(struct cell *cell);
diff --git a/hypervisor/arch/arm/include/asm/gic_v3.h b/hypervisor/arch/arm/include/asm/gic_v3.h
index 6768e7b..edc8767 100644
--- a/hypervisor/arch/arm/include/asm/gic_v3.h
+++ b/hypervisor/arch/arm/include/asm/gic_v3.h
@@ -244,5 +244,8 @@ static inline void gic_write_lr(unsigned int n, u64 val)
}
}

+struct per_cpu;
+int gicv3_handle_sgir_write(struct per_cpu *cpu_data, u64 sgir);
+
#endif /* __ASSEMBLY__ */
#endif /* _JAILHOUSE_ASM_GIC_V3_H */
diff --git a/hypervisor/arch/arm/include/asm/traps.h b/hypervisor/arch/arm/include/asm/traps.h
index 6965f81..b18709b 100644
--- a/hypervisor/arch/arm/include/asm/traps.h
+++ b/hypervisor/arch/arm/include/asm/traps.h
@@ -23,6 +23,7 @@
enum trap_return {
TRAP_HANDLED = 1,
TRAP_UNHANDLED = 0,
+ TRAP_FORBIDDEN = -1,
};

struct trap_context {
diff --git a/hypervisor/arch/arm/traps.c b/hypervisor/arch/arm/traps.c
index 9de1657..1016ece 100644
--- a/hypervisor/arch/arm/traps.c
+++ b/hypervisor/arch/arm/traps.c
@@ -15,6 +15,8 @@
*/

#include <asm/control.h>
+#include <asm/gic_common.h>
+#include <asm/platform.h>
#include <asm/traps.h>
#include <asm/sysregs.h>
#include <jailhouse/printk.h>
@@ -195,8 +197,39 @@ static int arch_handle_hvc(struct per_cpu *cpu_data, struct trap_context *ctx)
return TRAP_HANDLED;
}

+static int arch_handle_cp15_64(struct per_cpu *cpu_data, struct trap_context *ctx)
+{
+ unsigned long rt_val, rt2_val;
+ u32 opc1 = ctx->esr >> 16 & 0x7;
+ u32 rt2 = ctx->esr >> 10 & 0xf;
+ u32 rt = ctx->esr >> 5 & 0xf;
+ u32 crm = ctx->esr >> 1 & 0xf;
+ u32 read = ctx->esr & 1;
+
+ if (!read) {
+ access_cell_reg(ctx, rt, &rt_val, true);
+ access_cell_reg(ctx, rt2, &rt2_val, true);
+ }
+
+#ifdef CONFIG_ARM_GIC_V3
+ /* Trapped ICC_SGI1R write */
+ if (!read && opc1 == 0 && crm == 12) {
+ arch_skip_instruction(ctx);
+ return gicv3_handle_sgir_write(cpu_data,
+ (u64)rt2_val << 32 | rt_val);
+ }
+#else
+ /* Avoid `unused' warning... */
+ crm = crm;
+ opc1 = opc1;
+#endif
+
+ return TRAP_UNHANDLED;
+}
+
static const trap_handler trap_handlers[38] =
{
+ [ESR_EC_CP15_64] = arch_handle_cp15_64,
[ESR_EC_HVC] = arch_handle_hvc,
};
--
1.7.9.5
Jean-Philippe Brucker
2014-08-08 12:03:06 UTC
When emulating instructions, the trap handler will need to access the
cell registers according to the guest's processor mode when the trap
occurred, which is stored inside the saved PSR.
This patch allows the banked registers to be read and written directly.
If the HSR reports a load into r14 and the mode was IRQ, for instance,
the hypervisor will need to write into LR_irq instead of the LR saved
on the stack.

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/include/asm/traps.h | 47 +++++++++++++++++++++
hypervisor/arch/arm/traps.c | 68 +++++++++++++++++++++++++++++++
2 files changed, 115 insertions(+)

diff --git a/hypervisor/arch/arm/include/asm/traps.h b/hypervisor/arch/arm/include/asm/traps.h
index 9bab7e9..6965f81 100644
--- a/hypervisor/arch/arm/include/asm/traps.h
+++ b/hypervisor/arch/arm/include/asm/traps.h
@@ -16,6 +16,7 @@
#include <asm/head.h>
#include <asm/percpu.h>
#include <asm/types.h>
+#include <jailhouse/printk.h>

#ifndef __ASSEMBLY__

@@ -28,6 +29,7 @@ struct trap_context {
unsigned long *regs;
u32 esr;
u32 cpsr;
+ u32 pc;
};

typedef int (*trap_handler)(struct per_cpu *cpu_data,
@@ -39,5 +41,50 @@ typedef int (*trap_handler)(struct per_cpu *cpu_data,
#define arm_write_banked_reg(reg, val) \
asm volatile ("msr " #reg ", %0\n" : : "r" (val))

+#define _access_banked(reg, val, is_read) \
+ do { \
+ if (is_read) \
+ arm_read_banked_reg(reg, val); \
+ else \
+ arm_write_banked_reg(reg, val); \
+ } while (0)
+
+#define access_banked_reg(mode, reg, val, is_read) \
+ do { \
+ switch (reg) { \
+ case 13: \
+ _access_banked(SP_##mode, *val, is_read); \
+ break; \
+ case 14: \
+ _access_banked(LR_##mode, *val, is_read); \
+ break; \
+ default: \
+ printk("ERROR: access r%d in "#mode"\n", reg); \
+ } \
+ } while (0)
+
+static inline void access_fiq_reg(u8 reg, unsigned long *val, bool is_read)
+{
+ switch (reg) {
+ case 8: _access_banked(r8_fiq, *val, is_read); break;
+ case 9: _access_banked(r9_fiq, *val, is_read); break;
+ case 10: _access_banked(r10_fiq, *val, is_read); break;
+ case 11: _access_banked(r11_fiq, *val, is_read); break;
+ case 12: _access_banked(r12_fiq, *val, is_read); break;
+ default:
+ /* Use existing error reporting */
+ access_banked_reg(fiq, reg, val, is_read);
+ }
+}
+
+static inline void access_usr_reg(struct trap_context *ctx, u8 reg,
+ unsigned long *val, bool is_read)
+{
+ if (is_read)
+ *val = ctx->regs[reg];
+ else
+ ctx->regs[reg] = *val;
+}
+
#endif /* !__ASSEMBLY__ */
#endif /* !_JAILHOUSE_ASM_TRAPS_H */
diff --git a/hypervisor/arch/arm/traps.c b/hypervisor/arch/arm/traps.c
index 95628c6..7367357 100644
--- a/hypervisor/arch/arm/traps.c
+++ b/hypervisor/arch/arm/traps.c
@@ -16,6 +16,71 @@
#include <jailhouse/printk.h>
#include <jailhouse/control.h>

+static void access_cell_reg(struct trap_context *ctx, u8 reg,
+ unsigned long *val, bool is_read)
+{
+ unsigned long mode = ctx->cpsr & PSR_MODE_MASK;
+
+ switch (reg) {
+ case 0 ... 7:
+ access_usr_reg(ctx, reg, val, is_read);
+ break;
+ case 8 ... 12:
+ if (mode == PSR_FIQ_MODE)
+ access_fiq_reg(reg, val, is_read);
+ else
+ access_usr_reg(ctx, reg, val, is_read);
+ break;
+ case 13 ... 14:
+ switch (mode) {
+ case PSR_USR_MODE:
+ case PSR_SYS_MODE:
+ /*
+ * lr is saved on the stack, as it is not banked in HYP
+ * mode. sp is banked, so lr is at offset 13 in the USR
+ * regs.
+ */
+ if (reg == 13)
+ access_banked_reg(usr, reg, val, is_read);
+ else
+ access_usr_reg(ctx, 13, val, is_read);
+ break;
+ case PSR_SVC_MODE:
+ access_banked_reg(svc, reg, val, is_read);
+ break;
+ case PSR_UND_MODE:
+ access_banked_reg(und, reg, val, is_read);
+ break;
+ case PSR_ABT_MODE:
+ access_banked_reg(abt, reg, val, is_read);
+ break;
+ case PSR_IRQ_MODE:
+ access_banked_reg(irq, reg, val, is_read);
+ break;
+ case PSR_FIQ_MODE:
+ access_banked_reg(fiq, reg, val, is_read);
+ break;
+ }
+ break;
+ case 15:
+ /*
+ * A trapped instruction that accesses the PC? Probably a bug,
+ * but nothing seems to prevent it.
+ */
+ printk("WARNING: trapped instruction attempted to explicitly "
+ "access the PC.\n");
+ if (is_read)
+ *val = ctx->pc;
+ else
+ ctx->pc = *val;
+ break;
+ default:
+ /* Programming error */
+ printk("ERROR: attempt to access register %d\n", reg);
+ break;
+ }
+}
+
static int arch_handle_hvc(struct per_cpu *cpu_data, struct trap_context *ctx)
{
unsigned long *regs = ctx->regs;
@@ -36,6 +101,7 @@ void arch_handle_trap(struct per_cpu *cpu_data, struct registers *guest_regs)
u32 exception_class;
int ret = TRAP_UNHANDLED;

+ arm_read_banked_reg(ELR_hyp, ctx.pc);
arm_read_banked_reg(SPSR_hyp, ctx.cpsr);
arm_read_sysreg(ESR_EL2, ctx.esr);
exception_class = ESR_EC(ctx.esr);
@@ -49,4 +115,6 @@ void arch_handle_trap(struct per_cpu *cpu_data, struct registers *guest_regs)
cpu_data->cpu_id, ctx.esr);
while(1);
}
+
+ arm_write_banked_reg(ELR_hyp, ctx.pc);
}
--
1.7.9.5
Jean-Philippe Brucker
2014-08-08 12:03:05 UTC
The GIC IRQ handler loops over the ack register to get all pending IRQs.
It then either dispatches them to the common SGI handler, or injects
them into the cell.
A first attempt is made to inject an IRQ directly, by writing to a free
list register. If that fails, the IRQ is appended to the pending list,
and another attempt is made later on, once a maintenance interrupt is
received.
Injection into the GIC is a little expensive for the moment, because it
needs to iterate over all list registers that hold a valid interrupt, to
ensure there is no duplication. This could be optimised by only checking
the `active' GIC register for SPIs and PPIs.
A future patch will also add proper handling of the maintenance bits in
the vGIC.

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/control.c | 5 ++
hypervisor/arch/arm/gic-v3.c | 101 ++++++++++++++++++++++++++++
hypervisor/arch/arm/include/asm/control.h | 1 +
hypervisor/arch/arm/include/asm/irqchip.h | 1 +
hypervisor/arch/arm/include/asm/platform.h | 2 +
hypervisor/arch/arm/irqchip.c | 23 +++++++
6 files changed, 133 insertions(+)

diff --git a/hypervisor/arch/arm/control.c b/hypervisor/arch/arm/control.c
index e740977..4a6011c 100644
--- a/hypervisor/arch/arm/control.c
+++ b/hypervisor/arch/arm/control.c
@@ -14,6 +14,11 @@
#include <asm/irqchip.h>
#include <jailhouse/printk.h>

+void arch_handle_sgi(struct per_cpu *cpu_data, u32 irqn)
+{
+
+}
+
void arch_handle_exit(struct per_cpu *cpu_data, struct registers *regs)
{
switch (regs->exit_reason) {
diff --git a/hypervisor/arch/arm/gic-v3.c b/hypervisor/arch/arm/gic-v3.c
index b0c5dac..6c75561 100644
--- a/hypervisor/arch/arm/gic-v3.c
+++ b/hypervisor/arch/arm/gic-v3.c
@@ -16,6 +16,8 @@
#include <asm/gic_common.h>
#include <asm/platform.h>
#include <asm/setup.h>
+#include <asm/control.h>
+#include <jailhouse/control.h>
#include <jailhouse/printk.h>
#include <jailhouse/processor.h>

@@ -147,12 +149,111 @@ static int gic_send_sgi(struct sgi *sgi)
return 0;
}

+/*
+ * Handle the maintenance interrupt, the rest is injected into the cell.
+ * Return true when the IRQ has been handled by the hyp.
+ */
+static bool arch_handle_phys_irq(struct per_cpu *cpu_data, u32 irqn)
+{
+ if (irqn == MAINTENANCE_IRQ) {
+ irqchip_inject_pending(cpu_data);
+ return true;
+ }
+
+ irqchip_set_pending(cpu_data, irqn, true);
+
+ return false;
+}
+
static void gic_handle_irq(struct per_cpu *cpu_data)
{
+ bool handled = false;
+ u32 irq_id;
+
+ while (1) {
+ /* Read ICC_IAR1: set 'active' state */
+ arm_read_sysreg(ICC_IAR1_EL1, irq_id);
+
+ if (irq_id == 0x3ff) /* Spurious IRQ */
+ break;
+
+ /* Handle IRQ */
+ if (is_sgi(irq_id)) {
+ arch_handle_sgi(cpu_data, irq_id);
+ handled = true;
+ } else {
+ handled = arch_handle_phys_irq(cpu_data, irq_id);
+ }
+
+ /*
+ * Write ICC_EOIR1: drop priority, but stay active if handled is
+ * false.
+ * This allows to not be re-interrupted by a level-triggered
+ * interrupt that needs handling in the guest (e.g. timer)
+ */
+ arm_write_sysreg(ICC_EOIR1_EL1, irq_id);
+ /* Deactivate if necessary */
+ if (handled)
+ arm_write_sysreg(ICC_DIR_EL1, irq_id);
+ }
}

static int gic_inject_irq(struct per_cpu *cpu_data, struct pending_irq *irq)
{
+ int i;
+ int free_lr = -1;
+ u32 elsr;
+ u64 lr;
+
+ arm_read_sysreg(ICH_ELSR_EL2, elsr);
+ for (i = 0; i < gic_num_lr; i++) {
+ if ((elsr >> i) & 1) {
+ /* Entry is invalid, candidate for injection */
+ if (free_lr == -1)
+ free_lr = i;
+ continue;
+ }
+
+ /*
+ * Entry is in use, check that it doesn't match the one we want
+ * to inject.
+ */
+ lr = gic_read_lr(i);
+
+ /*
+ * A strict phys->virt id mapping is used for SPIs, so this test
+ * should be sufficient.
+ */
+ if ((u32)lr == irq->virt_id)
+ return -EINVAL;
+ }
+
+ if (free_lr == -1) {
+ u32 hcr;
+ /*
+ * All list registers are in use, trigger a maintenance
+ * interrupt once they are available again.
+ */
+ arm_read_sysreg(ICH_HCR_EL2, hcr);
+ hcr |= ICH_HCR_UIE;
+ arm_write_sysreg(ICH_HCR_EL2, hcr);
+
+ return -EBUSY;
+ }
+
+ lr = irq->virt_id;
+ /* Only group 1 interrupts */
+ lr |= ICH_LR_GROUP_BIT;
+ lr |= ICH_LR_PENDING;
+ if (irq->hw) {
+ lr |= ICH_LR_HW_BIT;
+ lr |= (u64)irq->type.irq << ICH_LR_PHYS_ID_SHIFT;
+ } else if (irq->type.sgi.maintenance) {
+ lr |= ICH_LR_SGI_EOI;
+ }
+
+ gic_write_lr(free_lr, lr);
+
return 0;
}

diff --git a/hypervisor/arch/arm/include/asm/control.h b/hypervisor/arch/arm/include/asm/control.h
index c974bc1..1e90148 100644
--- a/hypervisor/arch/arm/include/asm/control.h
+++ b/hypervisor/arch/arm/include/asm/control.h
@@ -20,6 +20,7 @@

int arch_mmu_cell_init(struct cell *cell);
int arch_mmu_cpu_cell_init(struct per_cpu *cpu_data);
+void arch_handle_sgi(struct per_cpu *cpu_data, u32 irqn);
void arch_handle_trap(struct per_cpu *cpu_data, struct registers *guest_regs);
void arch_handle_exit(struct per_cpu *cpu_data, struct registers *guest_regs);

diff --git a/hypervisor/arch/arm/include/asm/irqchip.h b/hypervisor/arch/arm/include/asm/irqchip.h
index 3fa37fd..a6b05e4 100644
--- a/hypervisor/arch/arm/include/asm/irqchip.h
+++ b/hypervisor/arch/arm/include/asm/irqchip.h
@@ -81,6 +81,7 @@ void irqchip_handle_irq(struct per_cpu *cpu_data);
int irqchip_inject_pending(struct per_cpu *cpu_data);
int irqchip_insert_pending(struct per_cpu *cpu_data, struct pending_irq *irq);
int irqchip_remove_pending(struct per_cpu *cpu_data, struct pending_irq *irq);
+int irqchip_set_pending(struct per_cpu *cpu_data, u32 irq_id, bool try_inject);

#endif /* __ASSEMBLY__ */
#endif /* _JAILHOUSE_ASM_IRQCHIP_H */
diff --git a/hypervisor/arch/arm/include/asm/platform.h b/hypervisor/arch/arm/include/asm/platform.h
index a69d744..8316689 100644
--- a/hypervisor/arch/arm/include/asm/platform.h
+++ b/hypervisor/arch/arm/include/asm/platform.h
@@ -36,6 +36,8 @@
# include <asm/gic_v3.h>
# endif /* GIC */

+# define MAINTENANCE_IRQ 25
+
#endif /* CONFIG_ARCH_VEXPRESS */
#endif /* !__ASSEMBLY__ */
#endif /* !_JAILHOUSE_ASM_PLATFORM_H */
diff --git a/hypervisor/arch/arm/irqchip.c b/hypervisor/arch/arm/irqchip.c
index 75acdd7..41f9754 100644
--- a/hypervisor/arch/arm/irqchip.c
+++ b/hypervisor/arch/arm/irqchip.c
@@ -105,6 +105,29 @@ int irqchip_insert_pending(struct per_cpu *cpu_data, struct pending_irq *irq)
return 0;
}

+int irqchip_set_pending(struct per_cpu *cpu_data, u32 irq_id, bool try_inject)
+{
+ struct pending_irq pending;
+
+ pending.virt_id = irq_id;
+ /* Priority must be less than ICC_PMR */
+ pending.priority = 0;
+
+ if (is_sgi(irq_id)) {
+ pending.hw = 0;
+ pending.type.sgi.maintenance = 0;
+ pending.type.sgi.cpuid = 0;
+ } else {
+ pending.hw = 1;
+ pending.type.irq = irq_id;
+ }
+
+ if (try_inject && irqchip.inject_irq(cpu_data, &pending) == 0)
+ return 0;
+
+ return irqchip_insert_pending(cpu_data, &pending);
+}
+
/*
* Only executed by `irqchip_inject_pending' on a CPU to inject its own stuff.
*/
--
1.7.9.5
Jean-Philippe Brucker
2014-08-08 12:03:13 UTC
This patch allows new guests to be booted with an -almost- empty
context. The reset function is still missing the Performance Monitor,
debug, SIMD and floating-point registers.

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/control.c | 71 ++++++++++++++++++++++++++++-
hypervisor/arch/arm/include/asm/sysregs.h | 41 ++++++++++++++++-
2 files changed, 109 insertions(+), 3 deletions(-)

diff --git a/hypervisor/arch/arm/control.c b/hypervisor/arch/arm/control.c
index 9df8f04..f8941a4 100644
--- a/hypervisor/arch/arm/control.c
+++ b/hypervisor/arch/arm/control.c
@@ -12,11 +12,77 @@

#include <asm/control.h>
#include <asm/irqchip.h>
+#include <asm/sysregs.h>
#include <asm/traps.h>
#include <jailhouse/control.h>
#include <jailhouse/printk.h>
#include <jailhouse/string.h>

+static void arch_reset_el1(struct registers *regs)
+{
+ /* Wipe all banked and usr regs */
+ memset(regs, 0, sizeof(struct registers));
+
+ arm_write_banked_reg(SP_usr, 0);
+ arm_write_banked_reg(SP_svc, 0);
+ arm_write_banked_reg(SP_abt, 0);
+ arm_write_banked_reg(SP_und, 0);
+ arm_write_banked_reg(SP_svc, 0);
+ arm_write_banked_reg(SP_irq, 0);
+ arm_write_banked_reg(SP_fiq, 0);
+ arm_write_banked_reg(LR_svc, 0);
+ arm_write_banked_reg(LR_abt, 0);
+ arm_write_banked_reg(LR_und, 0);
+ arm_write_banked_reg(LR_svc, 0);
+ arm_write_banked_reg(LR_irq, 0);
+ arm_write_banked_reg(LR_fiq, 0);
+ arm_write_banked_reg(R8_fiq, 0);
+ arm_write_banked_reg(R9_fiq, 0);
+ arm_write_banked_reg(R10_fiq, 0);
+ arm_write_banked_reg(R11_fiq, 0);
+ arm_write_banked_reg(R12_fiq, 0);
+ arm_write_banked_reg(SPSR_svc, 0);
+ arm_write_banked_reg(SPSR_abt, 0);
+ arm_write_banked_reg(SPSR_und, 0);
+ arm_write_banked_reg(SPSR_svc, 0);
+ arm_write_banked_reg(SPSR_irq, 0);
+ arm_write_banked_reg(SPSR_fiq, 0);
+
+ /* Wipe the system registers */
+ arm_write_sysreg(SCTLR_EL1, 0);
+ arm_write_sysreg(ACTLR_EL1, 0);
+ arm_write_sysreg(CPACR_EL1, 0);
+ arm_write_sysreg(CONTEXTIDR_EL1, 0);
+ arm_write_sysreg(PAR_EL1, 0);
+ arm_write_sysreg(TTBR0_EL1, 0);
+ arm_write_sysreg(TTBR1_EL1, 0);
+ arm_write_sysreg(CSSELR_EL1, 0);
+
+ arm_write_sysreg(CNTKCTL_EL1, 0);
+ arm_write_sysreg(CNTP_CTL_EL0, 0);
+ arm_write_sysreg(CNTP_CVAL_EL0, 0);
+ arm_write_sysreg(CNTV_CTL_EL0, 0);
+ arm_write_sysreg(CNTV_CVAL_EL0, 0);
+
+ /* AArch32 specific */
+ arm_write_sysreg(TTBCR, 0);
+ arm_write_sysreg(DACR, 0);
+ arm_write_sysreg(VBAR, 0);
+ arm_write_sysreg(DFSR, 0);
+ arm_write_sysreg(DFAR, 0);
+ arm_write_sysreg(IFSR, 0);
+ arm_write_sysreg(IFAR, 0);
+ arm_write_sysreg(ADFSR, 0);
+ arm_write_sysreg(AIFSR, 0);
+ arm_write_sysreg(MAIR0, 0);
+ arm_write_sysreg(MAIR1, 0);
+ arm_write_sysreg(AMAIR0, 0);
+ arm_write_sysreg(AMAIR1, 0);
+ arm_write_sysreg(TPIDRURW, 0);
+ arm_write_sysreg(TPIDRURO, 0);
+ arm_write_sysreg(TPIDRPRW, 0);
+}
+
static void arch_reset_self(struct per_cpu *cpu_data)
{
int err;
@@ -43,11 +109,12 @@ static void arch_reset_self(struct per_cpu *cpu_data)
else
reset_address = 0;

+ /* Restore an empty context */
+ arch_reset_el1(regs);
+
arm_write_banked_reg(ELR_hyp, reset_address);
arm_write_banked_reg(SPSR_hyp, RESET_PSR);
- memset(regs, 0, sizeof(struct registers));

- /* Restore an empty context */
vmreturn(regs);
}

diff --git a/hypervisor/arch/arm/include/asm/sysregs.h b/hypervisor/arch/arm/include/asm/sysregs.h
index b2aaf06..9ed2d4e 100644
--- a/hypervisor/arch/arm/include/asm/sysregs.h
+++ b/hypervisor/arch/arm/include/asm/sysregs.h
@@ -32,6 +32,11 @@
#define MPIDR_EL1 SYSREG_32(0, c0, c0, 5)
#define ID_PFR0_EL1 SYSREG_32(0, c0, c1, 0)
#define ID_PFR1_EL1 SYSREG_32(0, c0, c1, 1)
+#define SCTLR_EL1 SYSREG_32(0, c1, c0, 0)
+#define ACTLR_EL1 SYSREG_32(0, c1, c0, 1)
+#define CPACR_EL1 SYSREG_32(0, c1, c0, 2)
+#define CONTEXTIDR_EL1 SYSREG_32(0, c13, c0, 1)
+#define CSSELR_EL1 SYSREG_32(2, c0, c0, 0)
#define SCTLR_EL2 SYSREG_32(4, c1, c0, 0)
#define ESR_EL2 SYSREG_32(4, c5, c2, 0)
#define TPIDR_EL2 SYSREG_32(4, c13, c0, 2)
@@ -40,15 +45,49 @@
#define VTTBR_EL2 SYSREG_64(6, c2)
#define VTCR_EL2 SYSREG_32(4, c2, c1, 2)

+#define TTBR0_EL1 SYSREG_64(0, c2)
+#define TTBR1_EL1 SYSREG_64(1, c2)
#define PAR_EL1 SYSREG_64(0, c7)

-/* AArch32-specific registers */
+#define CNTKCTL_EL1 SYSREG_32(0, c14, c1, 0)
+#define CNTP_TVAL_EL0 SYSREG_32(0, c14, c2, 0)
+#define CNTP_CTL_EL0 SYSREG_32(0, c14, c2, 1)
+#define CNTP_CVAL_EL0 SYSREG_64(2, c14)
+#define CNTV_TVAL_EL0 SYSREG_32(0, c14, c3, 0)
+#define CNTV_CTL_EL0 SYSREG_32(0, c14, c3, 1)
+#define CNTV_CVAL_EL0 SYSREG_64(3, c14)
+
+/*
+ * AArch32-specific registers: they are 64bit on AArch64, and will need some
+ * helpers if used frequently.
+ */
+#define TTBCR SYSREG_32(0, c2, c0, 2)
+#define DACR SYSREG_32(0, c3, c0, 0)
+#define VBAR SYSREG_32(0, c12, c0, 0)
#define HCR SYSREG_32(4, c1, c1, 0)
#define HCR2 SYSREG_32(4, c1, c1, 4)
#define HMAIR0 SYSREG_32(4, c10, c2, 0)
#define HMAIR1 SYSREG_32(4, c10, c2, 1)
#define HVBAR SYSREG_32(4, c12, c0, 0)

+/* Mapped to ESR, IFSR32 and FAR in AArch64 */
+#define DFSR SYSREG_32(0, c5, c0, 0)
+#define DFAR SYSREG_32(0, c6, c0, 0)
+#define IFSR SYSREG_32(0, c5, c0, 1)
+#define IFAR SYSREG_32(0, c6, c0, 2)
+#define ADFSR SYSREG_32(0, c5, c1, 0)
+#define AIFSR SYSREG_32(0, c5, c1, 1)
+
+/* Mapped to MAIR_EL1 */
+#define MAIR0 SYSREG_32(0, c10, c2, 0)
+#define MAIR1 SYSREG_32(0, c10, c2, 1)
+#define AMAIR0 SYSREG_32(0, c10, c3, 0)
+#define AMAIR1 SYSREG_32(0, c10, c3, 1)
+
+#define TPIDRURW SYSREG_32(0, c13, c0, 2)
+#define TPIDRURO SYSREG_32(0, c13, c0, 3)
+#define TPIDRPRW SYSREG_32(0, c13, c0, 4)
+
#define ATS1HR SYSREG_32(4, c7, c8, 0)

#define TLBIALL SYSREG_32(0, c8, c7, 0)
--
1.7.9.5
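These SYSREG_32(<op1>, <CRn>, <CRm>, <op2>) values encode the operands of CP15 access instructions. As a rough sketch of how such encodings can be turned into an actual register write (the macro layout here is a hypothetical illustration, not necessarily Jailhouse's implementation):

```
/* A 32bit CP15 write is "mcr p15, <op1>, <Rt>, <CRn>, <CRm>, <op2>". */
#define SYSREG_32(op1, crn, crm, op2)	32, op1, crn, crm, op2

#define _write_sysreg_32(op1, crn, crm, op2, val) \
	asm volatile("mcr p15, "#op1", %0, "#crn", "#crm", "#op2 \
		     : : "r" (val))

#define _write_sysreg(size, ...)	_write_sysreg_##size(__VA_ARGS__)
#define arm_write_sysreg(sysreg, val)	_write_sysreg(sysreg, val)

/*
 * With the definitions above, arm_write_sysreg(SCTLR_EL1, 0) expands to
 * "mcr p15, 0, <Rt>, c1, c0, 0" with <Rt> holding 0.
 */
```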
Jean-Philippe Brucker
2014-08-08 12:03:29 UTC
Permalink
This patch implements two cases:
- When an error occurs before setting up EL2, there is not much
to do except restore the Linux registers stored in the per_cpu
data.
- When it happens after EL2 setup, arch_cpu_restore copies the saved
registers on the stack and continues into arch_shutdown_self.

When it happens during the MMU setup, chances of recovering a clean
state are pretty thin anyway. The bootstrap vectors could be used to
catch and dump a minimal context (which would require a raw_printk
implementation), but we cowardly ignore this case for the moment.

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/entry.S | 39 +++++++++++++++++++++-------
hypervisor/arch/arm/include/asm/setup.h | 42 ++++++++++++++++++++-----------
hypervisor/arch/arm/irqchip.c | 5 ++++
hypervisor/arch/arm/setup.c | 18 ++++++++++++-
4 files changed, 80 insertions(+), 24 deletions(-)

diff --git a/hypervisor/arch/arm/entry.S b/hypervisor/arch/arm/entry.S
index 2dd1a9a..6f9178c 100644
--- a/hypervisor/arch/arm/entry.S
+++ b/hypervisor/arch/arm/entry.S
@@ -21,25 +21,46 @@ arch_entry:
push {r0 - r12}

ldr r1, =__page_pool
- mov r2, #1
- lsl r2, #PERCPU_SIZE_SHIFT
+ mov r4, #1
+ lsl r4, #PERCPU_SIZE_SHIFT
/*
* percpu data = pool + cpuid * shift
* TODO: handle aff1 and aff2
*/
- mla r1, r2, r0, r1
- add r2, r1, #PERCPU_LINUX_SP
+ mla r1, r4, r0, r1
+ add r4, r1, #PERCPU_LINUX_SP

- /* Save SP, LR, CPSR */
- str sp, [r2], #4
- str lr, [r2], #4
+ /*
+ * Save SP, LR, CPSR
+ * r4 is used so that they can be easily retrieved on failure.
+ */
+ str sp, [r4], #4
+ str lr, [r4], #4
mrs r3, cpsr
- str r3, [r2]
+ str r3, [r4]

mov sp, r1
add sp, #PERCPU_STACK_END
+ /*
+ * Keep some space for a struct registers, in case setup fails and needs
+ * to return to the driver through the arch_shutdown_self path.
+ */
+ sub sp, #((NUM_USR_REGS + 1) * 4)
/* Call entry(cpuid, struct per_cpu*) */
- b entry
+ bl entry
+
+ /*
+ * entry only returns here when there is an error before setting up EL2
+ */
+ ldr r3, [r4], #-4
+ msr spsr, r3
+ ldr lr, [r4], #-4
+ ldr sp, [r4]
+
+ /* Keep the return value in r0 */
+ pop {r1}
+ pop {r1 - r12}
+ subs pc, lr, #0

.globl bootstrap_vectors
.align 5
diff --git a/hypervisor/arch/arm/include/asm/setup.h b/hypervisor/arch/arm/include/asm/setup.h
index ca9acaf..4a2ab6e 100644
--- a/hypervisor/arch/arm/include/asm/setup.h
+++ b/hypervisor/arch/arm/include/asm/setup.h
@@ -18,33 +18,47 @@

#ifndef __ASSEMBLY__

+#include <jailhouse/string.h>
+
static inline void __attribute__((always_inline))
-cpu_return_el1(struct per_cpu *cpu_data)
+cpu_return_el1(struct per_cpu *cpu_data, bool panic)
{
- /* Return value */
- cpu_data->linux_reg[0] = 0;
-
- asm volatile(
- /* Reset the hypervisor stack */
- "mov sp, %4\n"
+ /*
+ * Return value
+ * FIXME: there is no way, currently, to communicate the precise error
+ * number from the core. An `EDISASTER' would be appropriate here.
+ */
+ cpu_data->linux_reg[0] = (panic ? -EIO : 0);

+ asm volatile (
"msr sp_svc, %0\n"
"msr elr_hyp, %1\n"
"msr spsr_hyp, %2\n"
+ :
+ : "r" (cpu_data->linux_sp + (NUM_ENTRY_REGS * sizeof(unsigned long))),
+ "r" (cpu_data->linux_ret),
+ "r" (cpu_data->linux_flags));
+
+ if (panic) {
+ /* A panicking return needs to shut down EL2 before the ERET. */
+ struct registers *ctx = guest_regs(cpu_data);
+ memcpy(&ctx->usr, &cpu_data->linux_reg, NUM_ENTRY_REGS);
+ return;
+ }
+
+ asm volatile(
+ /* Reset the hypervisor stack */
+ "mov sp, %0\n"
/*
* We don't care about clobbering the other registers from now on. Must
* be in sync with arch_entry.
*/
- "ldm %3, {r0 - r12}\n"
+ "ldm %1, {r0 - r12}\n"
/* After this, the kernel won't be able to access the hypervisor code */
"eret\n"
:
- : "r" (cpu_data->linux_sp + (NUM_ENTRY_REGS * sizeof(unsigned long))),
- "r" (cpu_data->linux_ret),
- "r" (cpu_data->linux_flags),
- "r" (cpu_data->linux_reg),
- "r" (cpu_data->stack + PERCPU_STACK_END)
- :);
+ : "r" (cpu_data->stack + PERCPU_STACK_END),
+ "r" (cpu_data->linux_reg));
}

int switch_exception_level(struct per_cpu *cpu_data);
diff --git a/hypervisor/arch/arm/irqchip.c b/hypervisor/arch/arm/irqchip.c
index b1a9a59..7f667cc 100644
--- a/hypervisor/arch/arm/irqchip.c
+++ b/hypervisor/arch/arm/irqchip.c
@@ -225,6 +225,11 @@ int irqchip_cpu_reset(struct per_cpu *cpu_data)

void irqchip_cpu_shutdown(struct per_cpu *cpu_data)
{
+ /*
+ * The GIC backend must take care of only resetting the hyp interface if
+ * it has been initialised: this function may be executed during the
+ * setup phase.
+ */
if (irqchip.cpu_reset)
irqchip.cpu_reset(cpu_data, true);
}
diff --git a/hypervisor/arch/arm/setup.c b/hypervisor/arch/arm/setup.c
index d4785b8..0006611 100644
--- a/hypervisor/arch/arm/setup.c
+++ b/hypervisor/arch/arm/setup.c
@@ -113,7 +113,7 @@ int arch_init_late(void)
void arch_cpu_activate_vmm(struct per_cpu *cpu_data)
{
/* Return to the kernel */
- cpu_return_el1(cpu_data);
+ cpu_return_el1(cpu_data, false);

while (1);
}
@@ -175,4 +175,20 @@ void arch_shutdown(void)

void arch_cpu_restore(struct per_cpu *cpu_data)
{
+ /*
+ * If we haven't reached switch_exception_level yet, there is nothing to
+ * clean up.
+ */
+ if (!is_el2())
+ return;
+
+ /*
+ * Otherwise, attempt to disable the MMU and return to EL1 using the
+ * arch_shutdown path. cpu_return will fill the banked registers and the
+ * guest regs structure (stored at the beginning of the stack) to
+ * prepare the ERET.
+ */
+ cpu_return_el1(cpu_data, true);
+
+ arch_shutdown_self(cpu_data);
}
--
1.7.9.5
Jan Kiszka
2014-08-30 16:35:10 UTC
Permalink
Post by Jean-Philippe Brucker
- When an error occurs before setting up EL2, there is nothing much
to do except restore the linux registers stored in the per_cpu
datas.
- When it happens after EL2 setup, arch_cpu_restore copies the saved
registers on the stack, and continues into arch_shutdown_self
When it happens during the MMU setup, chances of recovering a clean
state are pretty thin anyway. The bootstrap vectors could be used to
catch and dump a minimal context (which would require a raw_printk
implementation), but we cowardly ignore this case for the moment.
---
hypervisor/arch/arm/entry.S | 39 +++++++++++++++++++++-------
hypervisor/arch/arm/include/asm/setup.h | 42 ++++++++++++++++++++-----------
hypervisor/arch/arm/irqchip.c | 5 ++++
hypervisor/arch/arm/setup.c | 18 ++++++++++++-
4 files changed, 80 insertions(+), 24 deletions(-)
diff --git a/hypervisor/arch/arm/entry.S b/hypervisor/arch/arm/entry.S
index 2dd1a9a..6f9178c 100644
--- a/hypervisor/arch/arm/entry.S
+++ b/hypervisor/arch/arm/entry.S
push {r0 - r12}
ldr r1, =__page_pool
- mov r2, #1
- lsl r2, #PERCPU_SIZE_SHIFT
+ mov r4, #1
+ lsl r4, #PERCPU_SIZE_SHIFT
/*
* percpu data = pool + cpuid * shift
* TODO: handle aff1 and aff2
*/
- mla r1, r2, r0, r1
- add r2, r1, #PERCPU_LINUX_SP
+ mla r1, r4, r0, r1
+ add r4, r1, #PERCPU_LINUX_SP
- /* Save SP, LR, CPSR */
- str sp, [r2], #4
- str lr, [r2], #4
+ /*
+ * Save SP, LR, CPSR
+ * r4 is used so that they can be easily retrieved on failure.
+ */
+ str sp, [r4], #4
+ str lr, [r4], #4
mrs r3, cpsr
- str r3, [r2]
+ str r3, [r4]
mov sp, r1
add sp, #PERCPU_STACK_END
+ /*
+ * Keep some space for a struct registers, in case setup fails and needs
+ * to return to the driver through the arch_shutdown_self path.
+ */
+ sub sp, #((NUM_USR_REGS + 1) * 4)
/* Call entry(cpuid, struct per_cpu*) */
- b entry
+ bl entry
+
+ /*
+ * entry only returns here when there is an error before setting up EL2
+ */
+ ldr r3, [r4], #-4
+ msr spsr, r3
+ ldr lr, [r4], #-4
+ ldr sp, [r4]
+
+ /* Keep the return value in r0 */
+ pop {r1}
+ pop {r1 - r12}
+ subs pc, lr, #0
I'm lacking the architectural knowledge to explain why, but our Odroid
dislikes the subs here. As far as I understand the code and the manual, it
should restore the CPSR state saved on entry. However, Linux crashes on
return from this function, and that happens even if entry() just returns
an error.

This works for me:

diff --git a/hypervisor/arch/arm/entry.S b/hypervisor/arch/arm/entry.S
index 6f9178c..278c0d8 100644
--- a/hypervisor/arch/arm/entry.S
+++ b/hypervisor/arch/arm/entry.S
@@ -53,14 +53,14 @@ arch_entry:
* entry only returns here when there is an error before setting up EL2
*/
ldr r3, [r4], #-4
- msr spsr, r3
+ msr cpsr, r3
ldr lr, [r4], #-4
ldr sp, [r4]

/* Keep the return value in r0 */
pop {r1}
pop {r1 - r12}
- subs pc, lr, #0
+ bx lr

.globl bootstrap_vectors
.align 5


Is it correct? And is the crash explainable?

Jan
Marc Zyngier
2014-09-01 08:19:50 UTC
Permalink

Hi Jan,
Post by Jan Kiszka
This patch implements two cases: - When an error occurs before
setting up EL2, there is nothing much to do except restore the
linux registers stored in the per_cpu datas. - When it happens
after EL2 setup, arch_cpu_restore copies the saved registers on
the stack, and continues into arch_shutdown_self
When it happens during the MMU setup, chances of recovering a
clean state are pretty thin anyway. The bootstrap vectors could
be used to catch and dump a minimal context (which would require
a raw_printk implementation), but we cowardly ignore this case
for the moment.
Signed-off-by: Jean-Philippe Brucker
[...]
I'm lacking architectural knowledge and can't explain why, but our
Odroid dislikes the subs here. As far as I understood code and
manual, it should restore the cpsr state saved on entry. However,
Linux crashes on return from this function, and that already if
entry() just returns an error.
[...]
Is it correct? And is the crash explainable?
I'm afraid this is the wrong fix. Writing directly to CPSR is very
much discouraged when doing an exception return (you lose the barrier
semantics, returning in the guest with a potential for some of the
instructions not architecturally executed yet). Also, doing so with
the MMU on could prove "interesting" if you don't have the same
mappings between HYP and your target exception level...

The issue I can see here is that you're restoring SPSR without
specifying any flag, which could result in only some of the bits being
restored. You want to use the "msr SPSR_cxsf, r3" idiom for a complete
restore (see ARM ARM B9.3.12).

Another problem is that I cannot see from this patch where ELR_hyp is
being set. The "subs pc, lr, #0" instruction would better be written:

msr ELR_hyp, lr
eret

ERET is the same as "subs pc, lr, #0", just more obvious. Setting the
ELR_hyp makes sure you're actually returning where you thought you were.

Let me know if that helps.
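Putting the two suggestions together, the tail of arch_entry would become something like the following (an untested sketch; it assumes the code is still executing in HYP mode at that point):

```
	ldr	r3, [r4], #-4
	msr	SPSR_cxsf, r3	/* restore all SPSR fields (ARM ARM B9.3.12) */
	ldr	lr, [r4], #-4
	ldr	sp, [r4]

	/* Keep the return value in r0 */
	pop	{r1}
	pop	{r1 - r12}
	msr	ELR_hyp, lr	/* return address is now explicit */
	eret			/* equivalent to "subs pc, lr, #0" */
```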

M.
--
Jazz is not dead. It just smells funny...
Jan Kiszka
2014-09-01 12:20:06 UTC
Permalink
Post by Marc Zyngier
Hi Jan,
Post by Jan Kiszka
This patch implements two cases: - When an error occurs before
setting up EL2, there is nothing much to do except restore the
linux registers stored in the per_cpu datas. - When it happens
after EL2 setup, arch_cpu_restore copies the saved registers on
the stack, and continues into arch_shutdown_self
When it happens during the MMU setup, chances of recovering a
clean state are pretty thin anyway. The bootstrap vectors could
be used to catch and dump a minimal context (which would require
a raw_printk implementation), but we cowardly ignore this case
for the moment.
Signed-off-by: Jean-Philippe Brucker
[...]
I'm lacking architectural knowledge and can't explain why, but our
Odroid dislikes the subs here. As far as I understood code and
manual, it should restore the cpsr state saved on entry. However,
Linux crashes on return from this function, and that already if
entry() just returns an error.
[...]
Is it correct? And is the crash explainable?
I'm afraid this is the wrong fix. Writing directly to CPSR is very
much discouraged when doing an exception return (you lose the barrier
semantics, returning in the guest with a potential for some of the
instructions not architecturally executed yet). Also, doing so with
the MMU on could prove "interesting" if you don't have the same
mappings between HYP and your target exception level...
The issue I can see here is that you're restoring SPSR without
specifying any flag, which could result in only some of the bits to be
restored. You want to use the "msr SPSR_cxsf, r3" idiom for a complete
restore (see ARM ARM B9.3.12).
Will give this a try later, thanks.
Post by Marc Zyngier
Another problem is that I cannot see from this patch where ELR_hyp is
msr ELR_hyp, lr
eret
ERET is the same as "subs pc, lr, #0", just more obvious. Setting the
ELR_hyp makes sure you're actually returning where you thought you were.
To my understanding of the code, we do not execute the arch_entry tail
in hyp mode. arch_shutdown_mmu should take us back before returning to
arch_entry. That would at least be analogous to x86.

BTW, switching to hyp mode (switch_exception_level) currently leaves me
stuck on the Odroid. No more outputs, no idea right now how to debug.
Suggestions welcome.

Thanks,
Jan
--
Siemens AG, Corporate Technology, CT RTC ITP SES-DE
Corporate Competence Center Embedded Linux
Jan Kiszka
2014-09-03 06:51:31 UTC
Permalink
Post by Jan Kiszka
Post by Marc Zyngier
Hi Jan,
Post by Jan Kiszka
This patch implements two cases: - When an error occurs before
setting up EL2, there is nothing much to do except restore the
linux registers stored in the per_cpu datas. - When it happens
after EL2 setup, arch_cpu_restore copies the saved registers on
the stack, and continues into arch_shutdown_self
When it happens during the MMU setup, chances of recovering a
clean state are pretty thin anyway. The bootstrap vectors could
be used to catch and dump a minimal context (which would require
a raw_printk implementation), but we cowardly ignore this case
for the moment.
Signed-off-by: Jean-Philippe Brucker
[...]
I'm lacking architectural knowledge and can't explain why, but our
Odroid dislikes the subs here. As far as I understood code and
manual, it should restore the cpsr state saved on entry. However,
Linux crashes on return from this function, and that already if
entry() just returns an error.
[...]
Is it correct? And is the crash explainable?
I'm afraid this is the wrong fix. Writing directly to CPSR is very
much discouraged when doing an exception return (you lose the barrier
semantics, returning in the guest with a potential for some of the
instructions not architecturally executed yet). Also, doing so with
the MMU on could prove "interesting" if you don't have the same
mappings between HYP and your target exception level...
The issue I can see here is that you're restoring SPSR without
specifying any flag, which could result in only some of the bits to be
restored. You want to use the "msr SPSR_cxsf, r3" idiom for a complete
restore (see ARM ARM B9.3.12).
Will give this a try later, thanks.
Still crashes with this change when returning immediately from the
invoked entry() function. In fact, it also crashes when defining
arch_entry like this:

arch_entry:
mrs r0, CPSR
msr SPSR_cxsf, r0
mvn r0, #~-38
subs pc, lr, #0

(in contrast to working "mvn r0, #~-38; bx lr")

I suspect a fundamental misunderstanding of one of those instructions.

Maybe the point is that we are not returning from an exception but an
ordinary function call here if an error occurred during setup. Not sure
if the ARM version uses arch_entry's tail also for other cases, and that
is where the subs comes from.
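That suspicion can be made concrete: the two instructions differ in what they do to the CPSR. A brief, illustrative contrast:

```
	/* Ordinary function return: pc <- lr, CPSR left untouched */
	bx	lr

	/*
	 * Exception return: pc <- lr - 0 and CPSR <- SPSR of the current
	 * mode. Only meaningful in a mode that was entered through an
	 * exception, where SPSR holds a valid saved state.
	 */
	subs	pc, lr, #0
```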

Jan
Jan Kiszka
2014-09-05 13:12:52 UTC
Permalink
Post by Jan Kiszka
Post by Jan Kiszka
Post by Marc Zyngier
Hi Jan,
Post by Jan Kiszka
This patch implements two cases: - When an error occurs before
setting up EL2, there is nothing much to do except restore the
linux registers stored in the per_cpu datas. - When it happens
after EL2 setup, arch_cpu_restore copies the saved registers on
the stack, and continues into arch_shutdown_self
When it happens during the MMU setup, chances of recovering a
clean state are pretty thin anyway. The bootstrap vectors could
be used to catch and dump a minimal context (which would require
a raw_printk implementation), but we cowardly ignore this case
for the moment.
Signed-off-by: Jean-Philippe Brucker
[...]
I'm lacking architectural knowledge and can't explain why, but our
Odroid dislikes the subs here. As far as I understood code and
manual, it should restore the cpsr state saved on entry. However,
Linux crashes on return from this function, and that already if
entry() just returns an error.
[...]
Is it correct? And is the crash explainable?
I'm afraid this is the wrong fix. Writing directly to CPSR is very
much discouraged when doing an exception return (you lose the barrier
semantics, returning in the guest with a potential for some of the
instructions not architecturally executed yet). Also, doing so with
the MMU on could prove "interesting" if you don't have the same
mappings between HYP and your target exception level...
The issue I can see here is that you're restoring SPSR without
specifying any flag, which could result in only some of the bits to be
restored. You want to use the "msr SPSR_cxsf, r3" idiom for a complete
restore (see ARM ARM B9.3.12).
Will give this a try later, thanks.
Still crashes with this change when returning immediately from the
invoked entry() function. In fact, it also crashes when defining
mrs r0, CPSR
msr SPSR_cxsf, r0
mvn r0, #~-38
subs pc, lr, #0
(in contrast to working "mvn r0, #~-38; bx lr")
I suspect a fundamental misunderstanding of one of those instructions.
Turned out that CONFIG_THUMB2_KERNEL was the key: Our kernel had this
enabled, and then the pattern above no longer works as expected.
Disabling it also fixed the hypervisor activation on our Odroid-XU - cool!

Jan
--
Siemens AG, Corporate Technology, CT RTC ITP SES-DE
Corporate Competence Center Embedded Linux
--
You received this message because you are subscribed to the Google Groups "Jailhouse" group.
To unsubscribe from this group and stop receiving emails from it, send an email to jailhouse-dev+***@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
Jean-Philippe Brucker
2014-09-08 20:17:57 UTC
Hi Jan,

Sorry I wasn't able to answer sooner, but I haven't had any decent
Internet access for the past weeks.
Post by Jan Kiszka
Post by Jan Kiszka
Post by Jan Kiszka
Post by Marc Zyngier
Hi Jan,
Post by Jan Kiszka
This patch implements two cases:
- When an error occurs before setting up EL2, there is nothing much to do
  except restore the Linux registers stored in the per_cpu data.
- When it happens after EL2 setup, arch_cpu_restore copies the saved
  registers on the stack, and continues into arch_shutdown_self.

When it happens during the MMU setup, chances of recovering a clean state
are pretty thin anyway. The bootstrap vectors could be used to catch and
dump a minimal context (which would require a raw_printk implementation),
but we cowardly ignore this case for the moment.

Signed-off-by: Jean-Philippe Brucker
---
 hypervisor/arch/arm/entry.S             | 39 +++++++++++++++++++++-------
 hypervisor/arch/arm/include/asm/setup.h | 42 ++++++++++++++++++++-----------
 hypervisor/arch/arm/irqchip.c           |  5 ++++
 hypervisor/arch/arm/setup.c             | 18 ++++++++++++-
 4 files changed, 80 insertions(+), 24 deletions(-)

diff --git a/hypervisor/arch/arm/entry.S b/hypervisor/arch/arm/entry.S
index 2dd1a9a..6f9178c 100644
--- a/hypervisor/arch/arm/entry.S
+++ b/hypervisor/arch/arm/entry.S
@@ -21,25 +21,46 @@ arch_entry:
 	push	{r0 - r12}

 	ldr	r1, =__page_pool
-	mov	r2, #1
-	lsl	r2, #PERCPU_SIZE_SHIFT
+	mov	r4, #1
+	lsl	r4, #PERCPU_SIZE_SHIFT
 	/*
 	 * percpu data = pool + cpuid * shift
 	 * TODO: handle aff1 and aff2
 	 */
-	mla	r1, r2, r0, r1
-	add	r2, r1, #PERCPU_LINUX_SP
+	mla	r1, r4, r0, r1
+	add	r4, r1, #PERCPU_LINUX_SP

-	/* Save SP, LR, CPSR */
-	str	sp, [r2], #4
-	str	lr, [r2], #4
+	/*
+	 * Save SP, LR, CPSR
+	 * r4 is used so that they can be easily retrieved on failure.
+	 */
+	str	sp, [r4], #4
+	str	lr, [r4], #4
 	mrs	r3, cpsr
-	str	r3, [r2]
+	str	r3, [r4]

 	mov	sp, r1
 	add	sp, #PERCPU_STACK_END
+	/*
+	 * Keep some space for a struct registers, in case setup fails and needs
+	 * to return to the driver through the arch_shutdown_self path.
+	 */
+	sub	sp, #((NUM_USR_REGS + 1) * 4)
 	/* Call entry(cpuid, struct per_cpu*) */
-	b	entry
+	bl	entry
+
+	/*
+	 * entry only returns here when there is an error before setting up EL2
+	 */
+	ldr	r3, [r4], #-4
+	msr	spsr, r3
+	ldr	lr, [r4], #-4
+	ldr	sp, [r4]
+
+	/* Keep the return value in r0 */
+	pop	{r1}
+	pop	{r1 - r12}
+	subs	pc, lr, #0
I'm lacking architectural knowledge and can't explain why, but our
Odroid dislikes the subs here. As far as I understood code and
manual, it should restore the cpsr state saved on entry. However,
Linux crashes on return from this function, and that already if
entry() just returns an error.
diff --git a/hypervisor/arch/arm/entry.S b/hypervisor/arch/arm/entry.S
index 6f9178c..278c0d8 100644
--- a/hypervisor/arch/arm/entry.S
+++ b/hypervisor/arch/arm/entry.S
[...]
 	 * entry only returns here when there is an error before setting up EL2
 	 */
 	ldr	r3, [r4], #-4
-	msr	spsr, r3
+	msr	cpsr, r3
 	ldr	lr, [r4], #-4
 	ldr	sp, [r4]

 	/* Keep the return value in r0 */
 	pop	{r1}
 	pop	{r1 - r12}
-	subs	pc, lr, #0
+	bx	lr

 .globl bootstrap_vectors
 	.align 5
Is it correct? And is the crash explainable?
I'm afraid this is the wrong fix. Writing directly to CPSR is very
much discouraged when doing an exception return (you lose the barrier
semantics, returning in the guest with a potential for some of the
instructions not architecturally executed yet). Also, doing so with
the MMU on could prove "interesting" if you don't have the same
mappings between HYP and your target exception level...
The issue I can see here is that you're restoring SPSR without
specifying any flag, which could result in only some of the bits to be
restored. You want to use the "msr SPSR_cxsf, r3" idiom for a complete
restore (see ARM ARM B9.3.12).
Will give this a try later, thanks.
Still crashes with this change when returning immediately from the
invoked entry() function. In fact, it also crashes when defining
mrs r0, CPSR
msr SPSR_cxsf, r0
mvn r0, #~-38
subs pc, lr, #0
(in contrast to working "mvn r0, #~-38; bx lr")
I suspect a fundamental misunderstanding of one of those instructions.
My assumption about this particular error is that the hypervisor is always
entered in ARM mode, since the driver jumps to an address aligned on 4
bytes. lr then contains an address whose least significant bit is 1,
which means that you will return cleanly to Thumb mode with a "bx lr".

But "eret" or "subs pc, lr, #0" ignores this interworking bit and only
exchanges the instruction set when the restored SPSR reflects it (I'm not
entirely sure, though).
Since arch_entry never sees a Thumb bit in the CPSR (and mrs r0, CPSR
would ignore it anyway), my 'subs' is clearly wrong for restoring the
kernel bits that matter (more below), and your test code returns to the
driver in ARM mode instead of Thumb.
Post by Jan Kiszka
Turned out that CONFIG_THUMB2_KERNEL was the key: Our kernel had this
enabled, and then the pattern above no longer works as expected.
Disabling it also fixed the hypervisor activation on our Odroid-XU - cool!
That's good news! I hadn't had time to take it further than hypervisor
(de)activation and cell creation on the Odroid-XU, but I'm glad it can
at least run the root cell.

I haven't really reflected about supporting the Thumb2 instruction set
yet, but it will need some tweaks to work with both a Thumb2 kernel
and/or a Thumb2 jailhouse.bin. Thumb2 guests are already supported, so
there shouldn't be too much work. Some of the asm bits will need to be
adapted to the unified syntax.

To follow the discussion about this particular patch: the arch_entry
tail is never executed in HYP mode, but only when the setup process fails
before switching to HYP. Any exception at this point would be entirely
handled by the kernel, and I'm not sure why I put an exception return
here...
After reading that code again, I'm not even sure there is a need to
restore any bit of CPSR in this case: the T flag should be changed by a
'Branch and eXchange' instead of an ERET, the EAIF flags and the
execution mode shouldn't be modified by the hypervisor entry, and the
other flags aren't supposed to stay consistent.

I will start reworking this series over the next weeks. I'll try to
integrate your comments and the many changes in master, but I still
won't have any way to test them before at least one month.
I will be travelling a lot too, so I should only have Internet access
sporadically.

Cheers,
Jean-Philippe
Jan Kiszka
2014-09-09 05:27:02 UTC
Hi Jean-Philippe,
Post by Jean-Philippe Brucker
Hi Jan,
Sorry I wasn't able to answer sooner, but I haven't had any decent
Internet access for the past weeks.
No problem. Good to have you available for some questions now - we
started to dig deeper into this, trying to get a stable demonstration
setup (not yet there, unfortunately). More below.
Post by Jean-Philippe Brucker
Post by Jan Kiszka
Post by Jan Kiszka
Post by Jan Kiszka
Post by Marc Zyngier
Hi Jan,
Post by Jan Kiszka
This patch implements two cases:
- When an error occurs before setting up EL2, there is nothing much to do
  except restore the Linux registers stored in the per_cpu data.
- When it happens after EL2 setup, arch_cpu_restore copies the saved
  registers on the stack, and continues into arch_shutdown_self.

When it happens during the MMU setup, chances of recovering a clean state
are pretty thin anyway. The bootstrap vectors could be used to catch and
dump a minimal context (which would require a raw_printk implementation),
but we cowardly ignore this case for the moment.

Signed-off-by: Jean-Philippe Brucker
---
 hypervisor/arch/arm/entry.S             | 39 +++++++++++++++++++++-------
 hypervisor/arch/arm/include/asm/setup.h | 42 ++++++++++++++++++++-----------
 hypervisor/arch/arm/irqchip.c           |  5 ++++
 hypervisor/arch/arm/setup.c             | 18 ++++++++++++-
 4 files changed, 80 insertions(+), 24 deletions(-)

diff --git a/hypervisor/arch/arm/entry.S b/hypervisor/arch/arm/entry.S
index 2dd1a9a..6f9178c 100644
--- a/hypervisor/arch/arm/entry.S
+++ b/hypervisor/arch/arm/entry.S
@@ -21,25 +21,46 @@ arch_entry:
 	push	{r0 - r12}

 	ldr	r1, =__page_pool
-	mov	r2, #1
-	lsl	r2, #PERCPU_SIZE_SHIFT
+	mov	r4, #1
+	lsl	r4, #PERCPU_SIZE_SHIFT
 	/*
 	 * percpu data = pool + cpuid * shift
 	 * TODO: handle aff1 and aff2
 	 */
-	mla	r1, r2, r0, r1
-	add	r2, r1, #PERCPU_LINUX_SP
+	mla	r1, r4, r0, r1
+	add	r4, r1, #PERCPU_LINUX_SP

-	/* Save SP, LR, CPSR */
-	str	sp, [r2], #4
-	str	lr, [r2], #4
+	/*
+	 * Save SP, LR, CPSR
+	 * r4 is used so that they can be easily retrieved on failure.
+	 */
+	str	sp, [r4], #4
+	str	lr, [r4], #4
 	mrs	r3, cpsr
-	str	r3, [r2]
+	str	r3, [r4]

 	mov	sp, r1
 	add	sp, #PERCPU_STACK_END
+	/*
+	 * Keep some space for a struct registers, in case setup fails and needs
+	 * to return to the driver through the arch_shutdown_self path.
+	 */
+	sub	sp, #((NUM_USR_REGS + 1) * 4)
 	/* Call entry(cpuid, struct per_cpu*) */
-	b	entry
+	bl	entry
+
+	/*
+	 * entry only returns here when there is an error before setting up EL2
+	 */
+	ldr	r3, [r4], #-4
+	msr	spsr, r3
+	ldr	lr, [r4], #-4
+	ldr	sp, [r4]
+
+	/* Keep the return value in r0 */
+	pop	{r1}
+	pop	{r1 - r12}
+	subs	pc, lr, #0
I'm lacking architectural knowledge and can't explain why, but our
Odroid dislikes the subs here. As far as I understood code and
manual, it should restore the cpsr state saved on entry. However,
Linux crashes on return from this function, and that already if
entry() just returns an error.
diff --git a/hypervisor/arch/arm/entry.S b/hypervisor/arch/arm/entry.S
index 6f9178c..278c0d8 100644
--- a/hypervisor/arch/arm/entry.S
+++ b/hypervisor/arch/arm/entry.S
[...]
 	 * entry only returns here when there is an error before setting up EL2
 	 */
 	ldr	r3, [r4], #-4
-	msr	spsr, r3
+	msr	cpsr, r3
 	ldr	lr, [r4], #-4
 	ldr	sp, [r4]

 	/* Keep the return value in r0 */
 	pop	{r1}
 	pop	{r1 - r12}
-	subs	pc, lr, #0
+	bx	lr

 .globl bootstrap_vectors
 	.align 5
Is it correct? And is the crash explainable?
I'm afraid this is the wrong fix. Writing directly to CPSR is very
much discouraged when doing an exception return (you lose the barrier
semantics, returning in the guest with a potential for some of the
instructions not architecturally executed yet). Also, doing so with
the MMU on could prove "interesting" if you don't have the same
mappings between HYP and your target exception level...
The issue I can see here is that you're restoring SPSR without
specifying any flag, which could result in only some of the bits to be
restored. You want to use the "msr SPSR_cxsf, r3" idiom for a complete
restore (see ARM ARM B9.3.12).
Will give this a try later, thanks.
Still crashes with this change when returning immediately from the
invoked entry() function. In fact, it also crashes when defining
mrs r0, CPSR
msr SPSR_cxsf, r0
mvn r0, #~-38
subs pc, lr, #0
(in contrast to working "mvn r0, #~-38; bx lr")
I suspect a fundamental misunderstanding of one of those instructions.
My assumption about this particular error is that the hypervisor is always
entered in ARM mode, since the driver jumps to an address aligned on 4
bytes. lr then contains an address whose least significant bit is 1,
which means that you will return cleanly to Thumb mode with a "bx lr".
But "eret" or "subs pc, lr, #0" ignores this interworking bit and only
exchanges the instruction set when the restored SPSR reflects it (I'm not
entirely sure, though).
Since arch_entry never sees a Thumb bit in the CPSR (and mrs r0, CPSR
would ignore it anyway), my 'subs' is clearly wrong for restoring the
kernel bits that matter (more below), and your test code returns to the
driver in ARM mode instead of Thumb.
Post by Jan Kiszka
Turned out that CONFIG_THUMB2_KERNEL was the key: Our kernel had this
enabled, and then the pattern above no longer works as expected.
Disabling it also fixed the hypervisor activation on our Odroid-XU - cool!
That's good news! I hadn't had time to take it further than hypervisor
(de)activation and cell creation on the Odroid-XU, but I'm glad it can
at least run the root cell.
I was also able to start (and stop) the ported UART

However, we have serious troubles with the kernels you can get for the
platform (a shame...). Disabling THUMB2_KERNEL leaves us with a very
unstable 3.14-hardkernel that already crashes on a plain "find /". Hans
(Johann) is trying to analyze this right now.

Mainline kernels are not yet providing what we need, specifically not
USB support, thus Ethernet. Already talked to Andreas Faerber who is
trying to improve this but can only make slow progress.

Unfortunately, the rather stable 3.4-android kernel is not working with
Jailhouse. Not sure if it's only that hypervisor-based cluster switcher
or more.
Post by Jean-Philippe Brucker
I haven't really reflected about supporting the Thumb2 instruction set
yet, but it will need some tweaks to work with both a Thumb2 kernel
and/or a Thumb2 jailhouse.bin. Thumb2 guests are already supported, so
there shouldn't be too much work. Some of the asm bits will need to be
adapted to the unified syntax.
That is our current plan B: convert Jailhouse to Thumb2 support. What is
the more common config variant on our target systems these days, Thumb2
on? Then it probably makes sense to support it, maybe even require it
(to make maintenance simpler).

What has to be changed besides the arch_entry code and the hvc encoding?
Post by Jean-Philippe Brucker
To follow the discussion about this particular patch: the arch_entry
tail is never executed in HYP mode, but only when the setup process fails
before switching to HYP. Any exception at this point would be entirely
handled by the kernel, and I'm not sure why I put an exception return
here...
After reading that code again, I'm not even sure there is a need to
restore any bit of CPSR in this case: the T flag should be changed by a
'Branch and eXchange' instead of an ERET, the EAIF flags and the
execution mode shouldn't be modified by the hypervisor entry, and the
other flags aren't supposed to stay consistent.
OK... Then we should indeed simplify the arch_entry return path.
Post by Jean-Philippe Brucker
I will start reworking this series over the next weeks. I'll try to
integrate your comments and the many changes in master, but I still
won't have any way to test them before at least one month.
I will be travelling a lot too, so I should only have Internet access
sporadically.
I think your review of changes done by us would already be very helpful!
Any support on rebasing as well, for sure. For us a stable version has
priority at the moment, then cleanups. But we can always organize to run
tests on our Odroid, for sure.

Thanks for your support!
Jan
--
Siemens AG, Corporate Technology, CT RTC ITP SES-DE
Corporate Competence Center Embedded Linux
Jean-Philippe Brucker
2014-10-06 14:32:04 UTC
Post by Jan Kiszka
[...]
However, we have serious troubles with the kernels you can get for the
platform (a shame...). Disabling THUMB2_KERNEL leaves us with a very
unstable 3.14-hardkernel that already crashes on a plain "find /". Hans
(Johann) is trying to analyze this right now.
Mainline kernels are not yet providing what we need, specifically not
USB support, thus Ethernet. Already talked to Andreas Faerber who is
trying to improve this but can only make slow progress.
Unfortunately, the rather stable 3.4-android kernel is not working with
Jailhouse. Not sure if it's only that hypervisor-based cluster switcher
or more.
Post by Jean-Philippe Brucker
I haven't really reflected about supporting the Thumb2 instruction set
yet, but it will need some tweaks to work with both a Thumb2 kernel
and/or a Thumb2 jailhouse.bin. Thumb2 guests are already supported, so
there shouldn't be too much work. Some of the asm bits will need to be
adapted to the unified syntax.
That is our current plan B: convert Jailhouse to Thumb2 support. What is
the more common config variant on our target systems these days, Thumb2
on? Then it probably makes sense to support it, maybe even require it
(to make maintenance simpler).
What has to be changed besides the arch_entry code and the hvc encoding?
What do you mean by 'hvc encoding'? GCC should generate the right
encoding for the driver's HVCs if the kernel config enables THUMB2_KERNEL.

Supporting a Thumb2 kernel shouldn't require much work:

- On setup, the driver (enter_hypervisor) does a blx into Jailhouse's
entry. (At least it should: I have no idea what ensures that the
compiler generates a blx rather than a simple bl.)
Anyway, jailhouse is entered in ARM mode since the address is a
multiple of 4, and the return address, which contains information
about the caller's instruction set in bit 0, is saved into lr.

When the setup succeeds, cpu_return_el1 restores the saved CPSR by
putting it into SPSR_hyp and the return address by putting it into
ELR_hyp. The final 'eret' will use those values to return to the kernel.
*However*, the target instruction set will need to be copied manually
from the return address into the target CPSR.T, since --I think-- it
would be ignored otherwise:
SPSR_hyp := cpudata->linux_flags | ((cpudata->linux_ret & 1) << 5)

When the setup fails, the return path is the one described previously
in this thread: we are still at kernel level, and a 'bx' (instead of my
faulty 'subs') should return to the driver in thumb mode.

- When an instruction is trapped to EL2, HSCTLR.TE contains the
hypervisor's instruction set, and SPSR_hyp contains the kernel one.
Except when injecting an abort, the trap handlers don't modify
SPSR_hyp.T, which means that Thumb2 guests are already supported.

- On hypervisor shutdown, the root CPUs will execute the whole
shutdown like a simple trap, and restore the kernel context in the
right instruction set on the final eret.
All non-root CPUs are reset and will re-enter the kernel's secondary
entry in ARM mode, as expected.

So to sum up, the changes I see for the moment are the replacement of
'subs pc, lr, #0' by 'bx lr', and the modification of cpu_return_el1.
There must be quite a few pitfalls that I'm forgetting, but I would need
a debugger to see them.

Supporting a Thumb2 hypervisor image is a bit more complicated, but I
think that's not what you want for now. For the record, it would require
at least the following changes:
- let the entry code (always executed in ARM mode) switch to Thumb if
necessary, by adding THUMB() and ARM() macros as Linux's head.S does,
- set HSCTLR.TE, to take traps in Thumb mode,
- unify all the assembly code to be valid in both ARM and Thumb2. Mostly
caches.S, I think.

Cheers,
Jean-Philippe
Jan Kiszka
2014-10-06 14:39:59 UTC
Post by Jean-Philippe Brucker
Post by Jan Kiszka
[...]
However, we have serious troubles with the kernels you can get for the
platform (a shame...). Disabling THUMB2_KERNEL leaves us with a very
unstable 3.14-hardkernel that already crashes on a plain "find /". Hans
(Johann) is trying to analyze this right now.
Mainline kernels are not yet providing what we need, specifically not
USB support, thus Ethernet. Already talked to Andreas Faerber who is
trying to improve this but can only make slow progress.
Unfortunately, the rather stable 3.4-android kernel is not working with
Jailhouse. Not sure if it's only that hypervisor-based cluster switcher
or more.
Post by Jean-Philippe Brucker
I haven't really reflected about supporting the Thumb2 instruction set
yet, but it will need some tweaks to work with both a Thumb2 kernel
and/or a Thumb2 jailhouse.bin. Thumb2 guests are already supported, so
there shouldn't be too much work. Some of the asm bits will need to be
adapted to the unified syntax.
That is our current plan B: convert Jailhouse to Thumb2 support. What is
the more common config variant on our target systems these days, Thumb2
on? Then it probably makes sense to support it, maybe even require it
(to make maintenance simpler).
What has to be changed besides the arch_entry code and the hvc encoding?
What do you mean by 'hvc encoding'? GCC should generate the right
encoding for the driver's HVCs if the kernel config enables THUMB2_KERNEL.
I was talking about the hvc triggered by the hypervisor in order to take
over hyp mode. That one is not wrapped. But I also decoupled the
hypervisor from any direct kconfig dependencies, so it wouldn't help in
the long run to use the kernel's hvc wrapper.
Post by Jean-Philippe Brucker
- On setup, the driver (enter_hypervisor) does a blx into Jailhouse's
entry. (At least it should: I have no idea what ensures that the
compiler generates a blx rather than a simple bl.)
Anyway, jailhouse is entered in ARM mode since the address is a
multiple of 4, and the return address, which contains information
about the caller's instruction set in bit 0, is saved into lr.
When the setup succeeds, cpu_return_el1 restores the saved CPSR by
putting it into SPSR_hyp and the return address by putting it into
ELR_hyp. The final 'eret' will use those values to return to the kernel.
*However*, the target instruction set will need to be copied manually
from the return address into the target CPSR.T, since --I think-- it
SPSR_hyp := cpudata->linux_flags | ((cpudata->linux_ret & 1) << 5)
When the setup fails, the return path is the one described previously
in this thread: we are still at kernel level, and a 'bx' (instead of my
faulty 'subs') should return to the driver in thumb mode.
- When an instruction is trapped to EL2, HSCTLR.TE contains the
hypervisor's instruction set, and SPSR_hyp contains the kernel one.
Except when injecting an abort, the trap handlers don't modify
SPSR_hyp.T, which means that Thumb2 guests are already supported.
- On hypervisor shutdown, the root CPUs will execute the whole
shutdown like a simple trap, and restore the kernel context in the
right instruction set on the final eret.
All non-root CPUs are reset and will re-enter the kernel's secondary
entry in ARM mode, as expected.
So to sum up, the changes I see for the moment are the replacement of
'subs pc, lr, #0' by 'bx lr', and the modification of cpu_return_el1.
There must be quite a few pitfalls that I'm forgetting, but I would need
a debugger to see them.
Sounds good.
Post by Jean-Philippe Brucker
Supporting a Thumb2 hypervisor image is a bit more complicated, but I
think that's not what you want for now. For the record, it would require
- let the entry code (always executed in ARM mode) switch to Thumb if
necessary, by adding THUMB() and ARM() macros as Linux's head.S does,
- set HSCTLR.TE, to take traps in Thumb mode,
- unify all the assembly code to be valid in both ARM and Thumb2. Mostly
caches.S, I think.
What would be the added-value for the hypervisor to run in Thumb2 mode?

Jan
--
Siemens AG, Corporate Technology, CT RTC ITP SES-DE
Corporate Competence Center Embedded Linux
Jean-Philippe Brucker
2014-10-07 19:43:28 UTC
Post by Jan Kiszka
Post by Jean-Philippe Brucker
Post by Jan Kiszka
[...]
However, we have serious troubles with the kernels you can get for the
platform (a shame...). Disabling THUMB2_KERNEL leaves us with a very
unstable 3.14-hardkernel that already crashes on a plain "find /". Hans
(Johann) is trying to analyze this right now.
Mainline kernels are not yet providing what we need, specifically not
USB support, thus Ethernet. Already talked to Andreas Faerber who is
trying to improve this but can only make slow progress.
Unfortunately, the rather stable 3.4-android kernel is not working with
Jailhouse. Not sure if it's only that hypervisor-based cluster switcher
or more.
Post by Jean-Philippe Brucker
I haven't really reflected about supporting the Thumb2 instruction set
yet, but it will need some tweaks to work with both a Thumb2 kernel
and/or a Thumb2 jailhouse.bin. Thumb2 guests are already supported, so
there shouldn't be too much work. Some of the asm bits will need to be
adapted to the unified syntax.
That is our current plan B: convert Jailhouse to Thumb2 support. What is
the more common config variant on our target systems these days, Thumb2
on? Then it probably makes sense to support it, maybe even require it
(to make maintenance simpler).
What has to be changed besides the arch_entry code and the hvc encoding?
What do you mean by 'hvc encoding'? GCC should generate the right
encoding for the driver's HVCs if the kernel config enables THUMB2_KERNEL.
I was talking about the hvc triggered by the hypervisor in order to take
over hyp mode. That one is not wrapped. But I also decoupled the
hypervisor from any direct kconfig dependencies, so it wouldn't help in
the long run to use the kernel's hvc wrapper.
Post by Jean-Philippe Brucker
- On setup, the driver (enter_hypervisor) does a blx into Jailhouse's
entry. (At least it should: I have no idea what ensures that the
compiler generates a blx rather than a simple bl.)
Anyway, jailhouse is entered in ARM mode since the address is a
multiple of 4, and the return address, which contains information
about the caller's instruction set in bit 0, is saved into lr.
When the setup succeeds, cpu_return_el1 restores the saved CPSR by
putting it into SPSR_hyp and the return address by putting it into
ELR_hyp. The final 'eret' will use those values to return to the kernel.
*However*, the target instruction set will need to be copied manually
from the return address into the target CPSR.T, since --I think-- it
SPSR_hyp := cpudata->linux_flags | ((cpudata->linux_ret & 1) << 5)
When the setup fails, the return path is the one described previously
in this thread: we are still at kernel level, and a 'bx' (instead of my
faulty 'subs') should return to the driver in thumb mode.
- When an instruction is trapped to EL2, HSCTLR.TE contains the
hypervisor's instruction set, and SPSR_hyp contains the kernel one.
Except when injecting an abort, the trap handlers don't modify
SPSR_hyp.T, which means that Thumb2 guests are already supported.
- On hypervisor shutdown, the root CPUs will execute the whole
shutdown like a simple trap, and restore the kernel context in the
right instruction set on the final eret.
All non-root CPUs are reset and will re-enter the kernel's secondary
entry in ARM mode, as expected.
So to sum up, the changes I see for the moment are the replacement of
'subs pc, lr, #0' by 'bx lr', and the modification of cpu_return_el1.
There must be quite a few pitfalls that I'm forgetting, but I would need
a debugger to see them.
Sounds good.
Post by Jean-Philippe Brucker
Supporting a Thumb2 hypervisor image is a bit more complicated, but I
think that's not what you want for now. For the record, it would require
- let the entry code (always executed in ARM mode) switch to Thumb if
necessary, by adding THUMB() and ARM() macros as Linux's head.S does,
- set HSCTLR.TE, to take traps in Thumb mode,
- unify all the assembly code to be valid in both ARM and Thumb2. Mostly
caches.S, I think.
What would be the added-value for the hypervisor to run in Thumb2 mode?
Only size optimisation, I think. Thumb2 is a mix of 16 and 32bit
instructions:

$ wc -c jailhouse*
26616 jailhouse-arm.bin
18392 jailhouse-thumb.bin

In theory, performance should be about the same, but I've never done
any benchmarking.

In other news, I spent the day trying to use wip/arm on the chromebook
and set up the serial. But I don't have the right tools for this kind of
soldering here, so I ended up destroying my motherboard :( I guess I'll
try something else once I'm done mourning.
I've not seen any issue with your rebases, for the moment.

Thanks,
Jean-Philippe
Jan Kiszka
2014-10-08 06:03:55 UTC
Post by Jean-Philippe Brucker
Post by Jan Kiszka
Post by Jean-Philippe Brucker
Post by Jan Kiszka
[...]
However, we have serious troubles with the kernels you can get for the
platform (a shame...). Disabling THUMB2_KERNEL leaves us with a very
unstable 3.14-hardkernel that already crashes on a plain "find /". Hans
(Johann) is trying to analyze this right now.
Mainline kernels are not yet providing what we need, specifically not
USB support, thus Ethernet. Already talked to Andreas Faerber who is
trying to improve this but can only make slow progress.
Unfortunately, the rather stable 3.4-android kernel is not working with
Jailhouse. Not sure if it's only that hypervisor-based cluster switcher
or more.
Post by Jean-Philippe Brucker
I haven't really reflected about supporting the Thumb2 instruction set
yet, but it will need some tweaks to work with both a Thumb2 kernel
and/or a Thumb2 jailhouse.bin. Thumb2 guests are already supported, so
there shouldn't be too much work. Some of the asm bits will need to be
adapted to the unified syntax.
That is our current plan B: convert Jailhouse to Thumb2 support. What is
the more common config variant on our target systems these days, Thumb2
on? Then it probably makes sense to support it, maybe even require it
(to make maintenance simpler).
What has to be changed besides the arch_entry code and the hvc encoding?
What do you mean by 'hvc encoding'? GCC should generate the right
encoding for the driver's HVCs if the kernel config enables THUMB2_KERNEL.
I was talking about the hvc triggered by the hypervisor in order to take
over hyp mode. That one is not wrapped. But I also decoupled the
hypervisor from any direct kconfig dependencies, so it wouldn't help in
the long run to use the kernel's hvc wrapper.
Post by Jean-Philippe Brucker
- On setup, the driver (enter_hypervisor) does a blx into Jailhouse's
entry. (At least it should: I have no idea what ensures that the
compiler generates a blx rather than a simple bl.)
Anyway, jailhouse is entered in ARM mode since the address is a
multiple of 4, and the return address, which contains information
about the caller's instruction set in bit 0, is saved into lr.
When the setup succeeds, cpu_return_el1 restores the saved CPSR by
putting it into SPSR_hyp and the return address by putting it into
ELR_hyp. The final 'eret' will use those values to return to the kernel.
*However*, the target instruction set will need to be copied manually
from the return address into the target CPSR.T, since --I think-- it
SPSR_hyp := cpudata->linux_flags | ((cpudata->linux_ret & 1) << 5)
When the setup fails, the return path is the one described previously
in this thread: we are still at kernel level, and a 'bx' (instead of my
faulty 'subs') should return to the driver in Thumb mode.
- When an instruction is trapped to EL2, HSCTLR.TE contains the
hypervisor's instruction set, and SPSR_hyp contains the kernel one.
Except when injecting an abort, the trap handlers don't modify
SPSR_hyp.T, which means that Thumb2 guests are already supported.
- On hypervisor shutdown, the root CPUs will execute the whole
shutdown like a simple trap, and restore the kernel context in the
right instruction set on the final eret.
All non-root CPUs are reset and will re-enter the kernel's secondary
entry in ARM mode, as expected.
So to sum up, the changes I see for the moment are the replacement of
'subs pc, lr, #0' by 'bx lr', and the modification of cpu_return_el1.
There must be quite a few pitfalls that I'm forgetting, but I would need
a debugger to see them.
Sounds good.
Post by Jean-Philippe Brucker
Supporting a Thumb2 hypervisor image is a bit more complicated, but I
think that's not what you want for now. For the record, it would require
- let the entry code (always executed in ARM mode) switch to Thumb if
necessary, by adding THUMB() and ARM() macros as Linux's head.S does,
- set HSCTLR.TE, to take traps in Thumb mode,
- unify all the assembly code to be valid in both ARM and Thumb2. Mostly
caches.S, I think.
What would be the added-value for the hypervisor to run in Thumb2 mode?
Only size optimisation, I think. Thumb2 is a mix of 16- and 32-bit
instructions:
$ wc -c jailhouse*
26616 jailhouse-arm.bin
18392 jailhouse-thumb.bin
In theory, performance should be about the same, but I've never done
any benchmarking.
Ok, so no urgent need. We can reevaluate this once we have a stable
environment with sensitive test cases.
Post by Jean-Philippe Brucker
In other news, I spent the day trying to use wip/arm on the chromebook
and set up the serial. But I don't have the right tools for this kind of
soldering here, so I ended up destroying my motherboard :( I guess I'll
try something else once I'm done mourning.
Sorry to hear this :-/
Post by Jean-Philippe Brucker
I've not seen any issue with your rebases, for the moment.
Great! Depending a bit on how we progress with the AMD64 integration, I
will line the current ARM bits up soon.

Jan
--
Siemens AG, Corporate Technology, CT RTC ITP SES-DE
Corporate Competence Center Embedded Linux
--
You received this message because you are subscribed to the Google Groups "Jailhouse" group.
To unsubscribe from this group and stop receiving emails from it, send an email to jailhouse-dev+***@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
Jean-Philippe Brucker
2014-08-08 12:03:22 UTC
Permalink
This patch adds the handling of MMIO accesses to the GICv3 distributor.
By restricting the SPI masks to the cell's configuration, it makes sure
that one cell does not touch the other cells' SPIs when writing the
common registers.
Except for the routing and SGIR registers, most of the code should be
common to both GICv2 and v3.

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/Makefile | 2 +-
hypervisor/arch/arm/gic-common.c | 205 ++++++++++++++++++++++++++
hypervisor/arch/arm/gic-v3.c | 6 +-
hypervisor/arch/arm/include/asm/gic_common.h | 9 ++
hypervisor/arch/arm/include/asm/irqchip.h | 13 ++
5 files changed, 232 insertions(+), 3 deletions(-)
create mode 100644 hypervisor/arch/arm/gic-common.c

diff --git a/hypervisor/arch/arm/Makefile b/hypervisor/arch/arm/Makefile
index 1b0a59c..472e224 100644
--- a/hypervisor/arch/arm/Makefile
+++ b/hypervisor/arch/arm/Makefile
@@ -18,7 +18,7 @@ obj-y := entry.o dbg-write.o exception.o setup.o control.o lib.o
obj-y += traps.o mmio.o
obj-y += paging.o mmu_hyp.o mmu_cell.o caches.o
obj-y += psci.o psci_low.o spin.o
-obj-y += irqchip.o
+obj-y += irqchip.o gic-common.o
obj-$(CONFIG_ARM_GIC_V3) += gic-v3.o
obj-$(CONFIG_ARCH_VEXPRESS) += dbg-write-pl011.o

diff --git a/hypervisor/arch/arm/gic-common.c b/hypervisor/arch/arm/gic-common.c
new file mode 100644
index 0000000..dcde88e
--- /dev/null
+++ b/hypervisor/arch/arm/gic-common.c
@@ -0,0 +1,205 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ * Authors:
+ * Jean-Philippe Brucker <jean-***@arm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <asm/cell.h>
+#include <asm/gic_common.h>
+#include <asm/io.h>
+#include <asm/irqchip.h>
+#include <asm/percpu.h>
+#include <asm/platform.h>
+#include <asm/spinlock.h>
+#include <asm/traps.h>
+#include <jailhouse/control.h>
+
+#define REG_RANGE(base, n, size) \
+ (base) ... ((base) + (n - 1) * (size))
+
+extern void *gicd_base;
+extern unsigned int gicd_size;
+
+static DEFINE_SPINLOCK(dist_lock);
+
+/*
+ * Most of the GIC distributor writes only reconfigure the IRQs corresponding to
+ * the bits of the written value, by using separate `set' and `clear' registers.
+ * Such registers can be handled by setting the `is_poke' boolean, which
+ * allows us to simply restrict access->val with the cell configuration mask.
+ * Others, such as the priority registers, will need to be read and written back
+ * with a restricted value, by using the distributor lock.
+ */
+static int restrict_bitmask_access(struct per_cpu *cpu_data,
+ struct mmio_access *access,
+ unsigned int reg_index,
+ unsigned int bits_per_irq,
+ bool is_poke)
+{
+ unsigned int spi;
+ unsigned long access_mask = 0;
+ /*
+ * In order to avoid division, the number of bits per irq is limited
+ * to powers of 2 for the moment.
+ */
+ unsigned long irqs_per_reg = 32 >> ffsl(bits_per_irq);
+ unsigned long spi_bits = (1 << bits_per_irq) - 1;
+ /* First, extract the first interrupt affected by this access */
+ unsigned int first_irq = reg_index * irqs_per_reg;
+
+ /* For SGIs or PPIs, let the caller do the mmio access */
+ if (!is_spi(first_irq))
+ return TRAP_UNHANDLED;
+
+ /* For SPIs, compare against the cell config mask */
+ first_irq -= 32;
+ for (spi = first_irq; spi < first_irq + irqs_per_reg; spi++) {
+ unsigned int bit_nr = (spi - first_irq) * bits_per_irq;
+ if (spi_in_cell(cpu_data->cell, spi))
+ access_mask |= spi_bits << bit_nr;
+ }
+
+ if (!access->is_write) {
+ /* Restrict the read value */
+ arch_mmio_access(access);
+ access->val &= access_mask;
+ return TRAP_HANDLED;
+ }
+
+ if (!is_poke) {
+ /*
+ * Modify the existing value of this register by first reading
+ * it into access->val
+ * Relies on a spinlock since we need two mmio accesses.
+ */
+ unsigned long access_val = access->val;
+
+ spin_lock(&dist_lock);
+
+ access->is_write = false;
+ arch_mmio_access(access);
+ access->is_write = true;
+
+ /* Clear 0 bits */
+ access->val &= ~(access_mask & ~access_val);
+ access->val |= access_val;
+ arch_mmio_access(access);
+
+ spin_unlock(&dist_lock);
+
+ return TRAP_HANDLED;
+ } else {
+ access->val &= access_mask;
+ /* Do the access */
+ return TRAP_UNHANDLED;
+ }
+}
+
+/*
+ * GICv3 uses a 64bit register IROUTER for each IRQ
+ */
+static int handle_irq_route(struct per_cpu *cpu_data,
+ struct mmio_access *access,
+ unsigned int irq)
+{
+ struct cell *cell = cpu_data->cell;
+ unsigned int cpu;
+
+ /* Ignore aff3 on AArch32 (return 0) */
+ if (access->size == 4 && (access->addr % 8))
+ return TRAP_HANDLED;
+
+ /* SGIs and PPIs are res0 */
+ if (!is_spi(irq))
+ return TRAP_HANDLED;
+
+ /*
+ * Ignore accesses to SPIs that do not belong to the cell. This isn't
+ * forbidden, because the guest driver may simply iterate over all
+ * registers at initialisation
+ */
+ if (!spi_in_cell(cell, irq - 32))
+ return TRAP_HANDLED;
+
+ /* Translate the virtual cpu id into the physical one */
+ if (access->is_write) {
+ access->val = cpu_virt2phys(cell, access->val);
+ if (access->val == -1) {
+ printk("Attempt to route IRQ%d outside of cell\n", irq);
+ return TRAP_FORBIDDEN;
+ }
+ /* And do the access */
+ return TRAP_UNHANDLED;
+ } else {
+ cpu = readl_relaxed(gicd_base + GICD_IROUTER + 8 * irq);
+ access->val = cpu_phys2virt(cpu);
+ return TRAP_HANDLED;
+ }
+}
+
+int gic_handle_dist_access(struct per_cpu *cpu_data,
+ struct mmio_access *access)
+{
+ int ret;
+ unsigned long reg = access->addr - (unsigned long)gicd_base;
+
+ switch (reg) {
+ case REG_RANGE(GICD_IROUTER, 1024, 8):
+ ret = handle_irq_route(cpu_data, access,
+ (reg - GICD_IROUTER) / 8);
+ break;
+
+ case REG_RANGE(GICD_ICENABLER, 32, 4):
+ case REG_RANGE(GICD_ISENABLER, 32, 4):
+ case REG_RANGE(GICD_ICPENDR, 32, 4):
+ case REG_RANGE(GICD_ISPENDR, 32, 4):
+ case REG_RANGE(GICD_ICACTIVER, 32, 4):
+ case REG_RANGE(GICD_ISACTIVER, 32, 4):
+ ret = restrict_bitmask_access(cpu_data, access,
+ (reg & 0x7f) / 4, 1, true);
+ break;
+
+ case REG_RANGE(GICD_IGROUPR, 32, 4):
+ ret = restrict_bitmask_access(cpu_data, access,
+ (reg & 0x7f) / 4, 1, false);
+ break;
+
+ case REG_RANGE(GICD_ICFGR, 64, 4):
+ ret = restrict_bitmask_access(cpu_data, access,
+ (reg & 0xff) / 4, 2, false);
+ break;
+
+ case REG_RANGE(GICD_IPRIORITYR, 255, 4):
+ ret = restrict_bitmask_access(cpu_data, access,
+ (reg & 0x3ff) / 4, 8, false);
+ break;
+
+ case GICD_CTLR:
+ case GICD_TYPER:
+ case GICD_IIDR:
+ case REG_RANGE(GICD_PIDR0, 4, 4):
+ case REG_RANGE(GICD_PIDR4, 4, 4):
+ case REG_RANGE(GICD_CIDR0, 4, 4):
+ /* Allow read access, ignore write */
+ ret = (access->is_write ? TRAP_HANDLED : TRAP_UNHANDLED);
+ break;
+
+ default:
+ /* Ignore access. */
+ ret = TRAP_HANDLED;
+ }
+
+ /* The sub-handlers return TRAP_UNHANDLED to allow the access */
+ if (ret == TRAP_UNHANDLED) {
+ arch_mmio_access(access);
+ ret = TRAP_HANDLED;
+ }
+
+ return ret;
+}
diff --git a/hypervisor/arch/arm/gic-v3.c b/hypervisor/arch/arm/gic-v3.c
index f6b940c..d3cc678 100644
--- a/hypervisor/arch/arm/gic-v3.c
+++ b/hypervisor/arch/arm/gic-v3.c
@@ -166,7 +166,6 @@ static int gic_cpu_init(struct per_cpu *cpu_data)
static void gic_route_spis(struct cell *config_cell, struct cell *dest_cell)
{
int i;
- u64 spis = config_cell->arch.spis;
void *irouter = gicd_base + GICD_IROUTER;
unsigned int first_cpu;

@@ -175,7 +174,7 @@ static void gic_route_spis(struct cell *config_cell, struct cell *dest_cell)
break;

for (i = 0; i < 64; i++, irouter += 8) {
- if (test_bit(i, (unsigned long *)&spis))
+ if (spi_in_cell(config_cell, i))
writeq_relaxed(first_cpu, irouter);
}
}
@@ -436,6 +435,9 @@ static int gic_mmio_access(struct per_cpu *cpu_data, struct mmio_access *access)
{
void *address = (void *)access->addr;

+ if (address >= gicd_base && address < gicd_base + gicd_size)
+ return gic_handle_dist_access(cpu_data, access);
+
if (address >= gicr_base && address < gicr_base + gicr_size)
return gic_handle_redist_access(cpu_data, access);

diff --git a/hypervisor/arch/arm/include/asm/gic_common.h b/hypervisor/arch/arm/include/asm/gic_common.h
index d2ff6ac..dc25279 100644
--- a/hypervisor/arch/arm/include/asm/gic_common.h
+++ b/hypervisor/arch/arm/include/asm/gic_common.h
@@ -40,4 +40,13 @@
#define is_ppi(irqn) ((irqn) > 15 && (irqn) < 32)
#define is_spi(irqn) ((irqn) > 31 && (irqn) < 1020)

+#ifndef __ASSEMBLY__
+
+struct mmio_access;
+struct per_cpu;
+
+int gic_handle_dist_access(struct per_cpu *cpu_data,
+ struct mmio_access *access);
+
+#endif /* !__ASSEMBLY__ */
#endif /* !_JAILHOUSE_ASM_GIC_COMMON_H */
diff --git a/hypervisor/arch/arm/include/asm/irqchip.h b/hypervisor/arch/arm/include/asm/irqchip.h
index c2c34b7..4a985aa 100644
--- a/hypervisor/arch/arm/include/asm/irqchip.h
+++ b/hypervisor/arch/arm/include/asm/irqchip.h
@@ -98,5 +98,18 @@ int irqchip_insert_pending(struct per_cpu *cpu_data, struct pending_irq *irq);
int irqchip_remove_pending(struct per_cpu *cpu_data, struct pending_irq *irq);
int irqchip_set_pending(struct per_cpu *cpu_data, u32 irq_id, bool try_inject);

+static inline bool spi_in_cell(struct cell *cell, unsigned int spi)
+{
+ /* FIXME: Change the configuration to a bitmask range */
+ u64 spi_mask;
+
+ if (spi >= 64)
+ return false;
+
+ spi_mask = cell->arch.spis;
+
+ return spi_mask & ((u64)1 << spi);
+}
+
#endif /* __ASSEMBLY__ */
#endif /* _JAILHOUSE_ASM_IRQCHIP_H */
--
1.7.9.5
Jean-Philippe Brucker
2014-08-08 12:03:23 UTC
Permalink
By adding a new field 'guest_mbox' in the cpu_datas, this patch allows
the guests to issue PSCI HVC calls. Currently, only PSCI_CPU_ON,
PSCI_CPU_OFF, and PSCI_VERSION are handled.
A call to CPU_OFF enters the suspend mode through arch_reset_self. When
a core calls CPU_ON, the hypervisor wakes up the other core, which will
take its return address from the guest_mbox, wipe its registers and go
back to EL1. The context argument to PSCI_CPU_ON is currently ignored,
since the whole core is reset.
This patch also traps SMC instructions in order to catch PSCI requests
issued this way. All other SMC calls are forwarded to EL3.

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/control.c | 2 +-
hypervisor/arch/arm/include/asm/control.h | 1 +
hypervisor/arch/arm/include/asm/percpu.h | 1 +
hypervisor/arch/arm/include/asm/processor.h | 2 +
hypervisor/arch/arm/include/asm/psci.h | 7 +++
hypervisor/arch/arm/psci.c | 74 +++++++++++++++++++++++++++
hypervisor/arch/arm/psci_low.S | 11 ++++
hypervisor/arch/arm/setup.c | 3 +-
hypervisor/arch/arm/traps.c | 21 +++++++-
9 files changed, 119 insertions(+), 3 deletions(-)

diff --git a/hypervisor/arch/arm/control.c b/hypervisor/arch/arm/control.c
index 0fcdba1..f223ae8 100644
--- a/hypervisor/arch/arm/control.c
+++ b/hypervisor/arch/arm/control.c
@@ -89,7 +89,7 @@ static void arch_reset_el1(struct registers *regs)
arm_write_sysreg(TPIDRPRW, 0);
}

-static void arch_reset_self(struct per_cpu *cpu_data)
+void arch_reset_self(struct per_cpu *cpu_data)
{
int err;
unsigned long reset_address;
diff --git a/hypervisor/arch/arm/include/asm/control.h b/hypervisor/arch/arm/include/asm/control.h
index bb97ff3..10a46c2 100644
--- a/hypervisor/arch/arm/include/asm/control.h
+++ b/hypervisor/arch/arm/include/asm/control.h
@@ -37,6 +37,7 @@ int arch_spin_init(void);
unsigned long arch_cpu_spin(void);
struct registers* arch_handle_exit(struct per_cpu *cpu_data,
struct registers *regs);
+void arch_reset_self(struct per_cpu *cpu_data);

void __attribute__((noreturn)) vmreturn(struct registers *guest_regs);

diff --git a/hypervisor/arch/arm/include/asm/percpu.h b/hypervisor/arch/arm/include/asm/percpu.h
index 3f67ed4..69873b5 100644
--- a/hypervisor/arch/arm/include/asm/percpu.h
+++ b/hypervisor/arch/arm/include/asm/percpu.h
@@ -58,6 +58,7 @@ struct per_cpu {

/* The mbox will be accessed with a ldrd, which requires alignment */
__attribute__((aligned(8))) struct psci_mbox psci_mbox;
+ struct psci_mbox guest_mbox;

bool cpu_stopped;
bool cell_pages_dirty;
diff --git a/hypervisor/arch/arm/include/asm/processor.h b/hypervisor/arch/arm/include/asm/processor.h
index fd0e1af..19872d1 100644
--- a/hypervisor/arch/arm/include/asm/processor.h
+++ b/hypervisor/arch/arm/include/asm/processor.h
@@ -171,6 +171,8 @@ struct registers {
#define wfi() asm volatile("wfi\n")
#define sev() asm volatile("sev\n")

+unsigned int smc(unsigned int r0, ...);
+
static inline void cpu_relax(void)
{
asm volatile("" : : : "memory");
diff --git a/hypervisor/arch/arm/include/asm/psci.h b/hypervisor/arch/arm/include/asm/psci.h
index 1883a6d..391574f 100644
--- a/hypervisor/arch/arm/include/asm/psci.h
+++ b/hypervisor/arch/arm/include/asm/psci.h
@@ -39,6 +39,8 @@
#define PSCI_NOT_PRESENT (-7)
#define PSCI_DISABLED (-8)

+#define IS_PSCI_FN(hvc) ((((hvc) >> 24) & 0x84) == 0x84)
+
#define PSCI_INVALID_ADDRESS 0xffffffff

#ifndef __ASSEMBLY__
@@ -60,5 +62,10 @@ void psci_suspend(struct per_cpu *cpu_data);
long psci_resume(unsigned int target);
long psci_try_resume(unsigned int cpu_id);

+long psci_dispatch(struct per_cpu *cpu_data, struct trap_context *ctx);
+
+int psci_cell_init(struct cell *cell);
+unsigned long psci_emulate_spin(struct per_cpu *cpu_data);
+
#endif /* !__ASSEMBLY__ */
#endif /* _JAILHOUSE_ASM_PSCI_H */
diff --git a/hypervisor/arch/arm/psci.c b/hypervisor/arch/arm/psci.c
index 132d6a0..9682505 100644
--- a/hypervisor/arch/arm/psci.c
+++ b/hypervisor/arch/arm/psci.c
@@ -11,8 +11,10 @@
*/

#include <asm/control.h>
+#include <asm/percpu.h>
#include <asm/psci.h>
#include <asm/traps.h>
+#include <jailhouse/control.h>

void _psci_cpu_off(struct psci_mbox *);
long _psci_cpu_on(struct psci_mbox *, unsigned long, unsigned long);
@@ -72,3 +74,75 @@ int psci_wait_cpu_stopped(unsigned int cpu_id)

return -EBUSY;
}
+
+static long psci_emulate_cpu_on(struct per_cpu *cpu_data,
+ struct trap_context *ctx)
+{
+ unsigned int target = ctx->regs[1];
+ unsigned int cpu;
+ struct psci_mbox *mbox;
+
+ cpu = cpu_virt2phys(cpu_data->cell, target);
+ if (cpu == -1)
+ /* Virtual id not in set */
+ return PSCI_DENIED;
+
+ mbox = &(per_cpu(cpu)->guest_mbox);
+ mbox->entry = ctx->regs[2];
+ mbox->context = ctx->regs[3];
+
+ return psci_resume(cpu);
+}
+
+/* Returns the secondary address set by the guest */
+unsigned long psci_emulate_spin(struct per_cpu *cpu_data)
+{
+ struct psci_mbox *mbox = &(cpu_data->guest_mbox);
+
+ mbox->entry = 0;
+
+ /* Wait for emulate_cpu_on or a trapped mmio to the mbox */
+ while (mbox->entry == 0)
+ psci_suspend(cpu_data);
+
+ return mbox->entry;
+}
+
+int psci_cell_init(struct cell *cell)
+{
+ unsigned int cpu;
+
+ for_each_cpu(cpu, cell->cpu_set) {
+ per_cpu(cpu)->guest_mbox.entry = 0;
+ per_cpu(cpu)->guest_mbox.context = 0;
+ }
+
+ return 0;
+}
+
+long psci_dispatch(struct per_cpu *cpu_data, struct trap_context *ctx)
+{
+ u32 function_id = ctx->regs[0];
+
+ switch (function_id) {
+ case PSCI_VERSION:
+ /* Major[31:16], minor[15:0] */
+ return 2;
+
+ case PSCI_CPU_OFF:
+ /*
+ * The reset function will take care of calling
+ * psci_emulate_spin
+ */
+ arch_reset_self(cpu_data);
+
+ /* Not reached */
+ return 0;
+
+ case PSCI_CPU_ON_32:
+ return psci_emulate_cpu_on(cpu_data, ctx);
+
+ default:
+ return PSCI_NOT_SUPPORTED;
+ }
+}
diff --git a/hypervisor/arch/arm/psci_low.S b/hypervisor/arch/arm/psci_low.S
index 76eeaba..58bdc0a 100644
--- a/hypervisor/arch/arm/psci_low.S
+++ b/hypervisor/arch/arm/psci_low.S
@@ -13,6 +13,17 @@
#include <asm/head.h>
#include <asm/psci.h>

+ .arch_extension sec
+ .globl smc
+ /*
+ * Since we trap all SMC instructions, it may be useful to forward them
+ * when it isn't a PSCI call. The shutdown code will also have to issue
+ * a real PSCI_OFF call on secondary CPUs.
+ */
+smc:
+ smc #0
+ bx lr
+
.global _psci_cpu_off
/* r0: struct psci_mbox* */
_psci_cpu_off:
diff --git a/hypervisor/arch/arm/setup.c b/hypervisor/arch/arm/setup.c
index 5593c78..d2b6ff0 100644
--- a/hypervisor/arch/arm/setup.c
+++ b/hypervisor/arch/arm/setup.c
@@ -59,7 +59,8 @@ int arch_init_early(void)
int arch_cpu_init(struct per_cpu *cpu_data)
{
int err = 0;
- unsigned long hcr = HCR_VM_BIT | HCR_IMO_BIT | HCR_FMO_BIT;
+ unsigned long hcr = HCR_VM_BIT | HCR_IMO_BIT | HCR_FMO_BIT
+ | HCR_TSC_BIT;

cpu_data->psci_mbox.entry = 0;
cpu_data->virt_id = cpu_data->cpu_id;
diff --git a/hypervisor/arch/arm/traps.c b/hypervisor/arch/arm/traps.c
index edbb811..d18794e 100644
--- a/hypervisor/arch/arm/traps.c
+++ b/hypervisor/arch/arm/traps.c
@@ -17,6 +17,7 @@
#include <asm/control.h>
#include <asm/gic_common.h>
#include <asm/platform.h>
+#include <asm/psci.h>
#include <asm/traps.h>
#include <asm/sysregs.h>
#include <jailhouse/printk.h>
@@ -204,11 +205,28 @@ static void dump_guest_regs(struct per_cpu *cpu_data, struct trap_context *ctx)
panic_printk("\n");
}

+static int arch_handle_smc(struct per_cpu *cpu_data, struct trap_context *ctx)
+{
+ unsigned long *regs = ctx->regs;
+
+ if (IS_PSCI_FN(regs[0]))
+ regs[0] = psci_dispatch(cpu_data, ctx);
+ else
+ regs[0] = smc(regs[0], regs[1], regs[2], regs[3]);
+
+ arch_skip_instruction(ctx);
+
+ return TRAP_HANDLED;
+}
+
static int arch_handle_hvc(struct per_cpu *cpu_data, struct trap_context *ctx)
{
unsigned long *regs = ctx->regs;

- regs[0] = hypercall(cpu_data, regs[0], regs[1], regs[2]);
+ if (IS_PSCI_FN(regs[0]))
+ regs[0] = psci_dispatch(cpu_data, ctx);
+ else
+ regs[0] = hypercall(cpu_data, regs[0], regs[1], regs[2]);

return TRAP_HANDLED;
}
@@ -247,6 +265,7 @@ static const trap_handler trap_handlers[38] =
{
[ESR_EC_CP15_64] = arch_handle_cp15_64,
[ESR_EC_HVC] = arch_handle_hvc,
+ [ESR_EC_SMC] = arch_handle_smc,
[ESR_EC_DABT] = arch_handle_dabt,
};
--
1.7.9.5
Jean-Philippe Brucker
2014-08-08 12:03:18 UTC
Permalink
This patch adds the necessary code for handling MMIO accesses. The trap
handler fills an mmio_access struct according to the fields in the ESR,
and passes it to all relevant sub-handlers.
If all return UNHANDLED, the access is considered invalid and the CPU is
put into failed mode.

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/Makefile | 3 +-
hypervisor/arch/arm/include/asm/bitops.h | 12 ++
hypervisor/arch/arm/include/asm/processor.h | 2 +
hypervisor/arch/arm/include/asm/traps.h | 14 +++
hypervisor/arch/arm/mmio.c | 162 +++++++++++++++++++++++++++
hypervisor/arch/arm/traps.c | 7 +-
6 files changed, 196 insertions(+), 4 deletions(-)
create mode 100644 hypervisor/arch/arm/mmio.c

diff --git a/hypervisor/arch/arm/Makefile b/hypervisor/arch/arm/Makefile
index 2a4d343..1b0a59c 100644
--- a/hypervisor/arch/arm/Makefile
+++ b/hypervisor/arch/arm/Makefile
@@ -14,7 +14,8 @@ KBUILD_AFLAGS := $(filter-out -include asm/unified.h,$(KBUILD_AFLAGS))

always := built-in.o

-obj-y := entry.o dbg-write.o exception.o setup.o control.o lib.o traps.o
+obj-y := entry.o dbg-write.o exception.o setup.o control.o lib.o
+obj-y += traps.o mmio.o
obj-y += paging.o mmu_hyp.o mmu_cell.o caches.o
obj-y += psci.o psci_low.o spin.o
obj-y += irqchip.o
diff --git a/hypervisor/arch/arm/include/asm/bitops.h b/hypervisor/arch/arm/include/asm/bitops.h
index de63d39..a15614b 100644
--- a/hypervisor/arch/arm/include/asm/bitops.h
+++ b/hypervisor/arch/arm/include/asm/bitops.h
@@ -121,5 +121,17 @@ static inline unsigned long ffzl(unsigned long word)
return ffsl(~word);
}

+/* Extend the value of 'size' bits to a signed long */
+static inline unsigned long sign_extend(unsigned long val, unsigned int size)
+{
+ unsigned long mask;
+
+ if (size >= sizeof(unsigned long) * 8)
+ return val;
+
+ mask = 1U << (size - 1);
+ return (val ^ mask) - mask;
+}
+
#endif /* !__ASSEMBLY__ */
#endif /* !_JAILHOUSE_ASM_BITOPS_H */
diff --git a/hypervisor/arch/arm/include/asm/processor.h b/hypervisor/arch/arm/include/asm/processor.h
index 48b803d..78223d1 100644
--- a/hypervisor/arch/arm/include/asm/processor.h
+++ b/hypervisor/arch/arm/include/asm/processor.h
@@ -32,6 +32,8 @@
#define PSR_F_BIT (1 << 6)
#define PSR_I_BIT (1 << 7)
#define PSR_A_BIT (1 << 8)
+#define PSR_E_BIT (1 << 9)
+#define PSR_J_BIT (1 << 24)
#define PSR_IT_MASK(it) (((it) & 0x3) << 25 | ((it) & 0xfc) << 8)
#define PSR_IT(psr) (((psr) >> 25 & 0x3) | ((psr) >> 8 & 0xfc))

diff --git a/hypervisor/arch/arm/include/asm/traps.h b/hypervisor/arch/arm/include/asm/traps.h
index b18709b..c4a0375 100644
--- a/hypervisor/arch/arm/include/asm/traps.h
+++ b/hypervisor/arch/arm/include/asm/traps.h
@@ -33,6 +33,13 @@ struct trap_context {
u32 pc;
};

+struct mmio_access {
+ unsigned long addr;
+ bool is_write;
+ unsigned int size;
+ unsigned long val;
+};
+
typedef int (*trap_handler)(struct per_cpu *cpu_data,
struct trap_context *ctx);

@@ -87,5 +94,12 @@ static inline void access_usr_reg(struct trap_context *ctx, u8 reg,
ctx->regs[reg] = *val;
}

+void access_cell_reg(struct trap_context *ctx, u8 reg, unsigned long *val,
+ bool is_read);
+void arch_skip_instruction(struct trap_context *ctx);
+
+int arch_handle_dabt(struct per_cpu *cpu_data, struct trap_context *ctx);
+int arch_mmio_access(struct mmio_access *access);
+
#endif /* !__ASSEMBLY__ */
#endif /* !_JAILHOUSE_ASM_TRAPS_H */
diff --git a/hypervisor/arch/arm/mmio.c b/hypervisor/arch/arm/mmio.c
new file mode 100644
index 0000000..56e6f35
--- /dev/null
+++ b/hypervisor/arch/arm/mmio.c
@@ -0,0 +1,162 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ * Authors:
+ * Jean-Philippe Brucker <jean-***@arm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <asm/io.h>
+#include <asm/irqchip.h>
+#include <asm/processor.h>
+#include <asm/traps.h>
+
+/* Taken from the ARM ARM pseudocode for taking a data abort */
+static void arch_inject_dabt(struct trap_context *ctx, unsigned long addr)
+{
+ unsigned int lr_offset;
+ unsigned long vbar;
+ bool is_thumb;
+ u32 sctlr, ttbcr;
+
+ arm_read_sysreg(SCTLR_EL1, sctlr);
+ arm_read_sysreg(TTBCR, ttbcr);
+
+ /* Set cpsr */
+ is_thumb = ctx->cpsr & PSR_T_BIT;
+ ctx->cpsr &= ~(PSR_MODE_MASK | PSR_IT_MASK(0xff) | PSR_T_BIT
+ | PSR_J_BIT | PSR_E_BIT);
+ ctx->cpsr |= (PSR_ABT_MODE | PSR_I_BIT | PSR_A_BIT);
+ if (sctlr & SCTLR_TE_BIT)
+ ctx->cpsr |= PSR_T_BIT;
+ if (sctlr & SCTLR_EE_BIT)
+ ctx->cpsr |= PSR_E_BIT;
+
+ lr_offset = (is_thumb ? 4 : 0);
+ arm_write_banked_reg(LR_abt, ctx->pc + lr_offset);
+
+ /* Branch to dabt vector */
+ if (sctlr & SCTLR_V_BIT)
+ vbar = 0xffff0000;
+ else
+ arm_read_sysreg(VBAR, vbar);
+ ctx->pc = vbar + 0x10;
+
+ /* Signal a debug fault. DFSR layout depends on the LPAE bit */
+ if (ttbcr >> 31)
+ arm_write_sysreg(DFSR, (1 << 9) | 0x22);
+ else
+ arm_write_sysreg(DFSR, 0x2);
+ arm_write_sysreg(DFAR, addr);
+}
+
+int arch_mmio_access(struct mmio_access *access)
+{
+ void *addr = (void *)access->addr;
+
+ if (access->is_write) {
+ switch (access->size) {
+ case 1:
+ writeb_relaxed(access->val, addr);
+ break;
+ case 2:
+ writew_relaxed(access->val, addr);
+ break;
+ case 4:
+ writel_relaxed(access->val, addr);
+ break;
+ default:
+ return -EINVAL;
+ }
+ } else {
+ switch (access->size) {
+ case 1:
+ access->val = readb_relaxed(addr);
+ break;
+ case 2:
+ access->val = readw_relaxed(addr);
+ break;
+ case 4:
+ access->val = readl_relaxed(addr);
+ break;
+ default:
+ return -EINVAL;
+ }
+ }
+
+ return 0;
+}
+
+int arch_handle_dabt(struct per_cpu *cpu_data, struct trap_context *ctx)
+{
+ struct mmio_access access;
+ unsigned long hpfar;
+ unsigned long hdfar;
+ int ret = TRAP_UNHANDLED;
+ /* Decode the syndrome fields */
+ u32 icc = ESR_ICC(ctx->esr);
+ u32 isv = icc >> 24;
+ u32 sas = icc >> 22 & 0x3;
+ u32 sse = icc >> 21 & 0x1;
+ u32 srt = icc >> 16 & 0xf;
+ u32 ea = icc >> 9 & 0x1;
+ u32 cm = icc >> 8 & 0x1;
+ u32 s1ptw = icc >> 7 & 0x1;
+ u32 is_write = icc >> 6 & 0x1;
+ u32 size = 1 << sas;
+
+ arm_read_sysreg(HPFAR, hpfar);
+ arm_read_sysreg(HDFAR, hdfar);
+ access.addr = hpfar << 8;
+ access.addr |= hdfar & 0xfff;
+
+ /*
+ * An invalid instruction syndrome means a multi-register access or
+ * writeback; there is nothing we can do.
+ */
+ if (!isv || size > sizeof(unsigned long))
+ goto error_unhandled;
+
+ /* Re-inject abort during page walk, cache maintenance or external */
+ if (s1ptw || ea || cm) {
+ arch_inject_dabt(ctx, hdfar);
+ return TRAP_HANDLED;
+ }
+
+ if (is_write) {
+ /* Load the value to write from the src register */
+ access_cell_reg(ctx, srt, &access.val, true);
+ if (sse)
+ access.val = sign_extend(access.val, 8 * size);
+ } else {
+ access.val = 0;
+ }
+ access.is_write = is_write;
+ access.size = size;
+
+ /* ret = sub-handler call... */
+
+ if (ret == TRAP_HANDLED) {
+ /* Put the read value into the dest register */
+ if (!is_write) {
+ if (sse)
+ access.val = sign_extend(access.val, 8 * size);
+ access_cell_reg(ctx, srt, &access.val, false);
+ }
+
+ arch_skip_instruction(ctx);
+ }
+
+ if (ret != TRAP_UNHANDLED)
+ return ret;
+
+error_unhandled:
+ panic_printk("Unhandled data %s at 0x%x(%d)\n",
+ (is_write ? "write" : "read"), access.addr, size);
+
+ return ret;
+}
diff --git a/hypervisor/arch/arm/traps.c b/hypervisor/arch/arm/traps.c
index 2de6293..edbb811 100644
--- a/hypervisor/arch/arm/traps.c
+++ b/hypervisor/arch/arm/traps.c
@@ -115,7 +115,7 @@ static void arch_advance_itstate(struct trap_context *ctx)
ctx->cpsr = cpsr;
}

-static void arch_skip_instruction(struct trap_context *ctx)
+void arch_skip_instruction(struct trap_context *ctx)
{
u32 instruction_length = ESR_IL(ctx->esr);

@@ -123,8 +123,8 @@ static void arch_skip_instruction(struct trap_context *ctx)
arch_advance_itstate(ctx);
}

-static void access_cell_reg(struct trap_context *ctx, u8 reg,
- unsigned long *val, bool is_read)
+void access_cell_reg(struct trap_context *ctx, u8 reg, unsigned long *val,
+ bool is_read)
{
unsigned long mode = ctx->cpsr & PSR_MODE_MASK;

@@ -247,6 +247,7 @@ static const trap_handler trap_handlers[38] =
{
[ESR_EC_CP15_64] = arch_handle_cp15_64,
[ESR_EC_HVC] = arch_handle_hvc,
+ [ESR_EC_DABT] = arch_handle_dabt,
};

void arch_handle_trap(struct per_cpu *cpu_data, struct registers *guest_regs)
--
1.7.9.5
Jean-Philippe Brucker
2014-08-08 12:03:26 UTC
Permalink
This patch stores the hypervisor stub vectors before taking over EL2, in
order to restore them on shutdown. It assumes that they are the same on
all CPUs.

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/exception.S | 10 ++++++++++
hypervisor/arch/arm/include/asm/processor.h | 1 +
hypervisor/arch/arm/mmu_hyp.c | 11 +++++++++++
3 files changed, 22 insertions(+)

diff --git a/hypervisor/arch/arm/exception.S b/hypervisor/arch/arm/exception.S
index ede7a13..6701aac 100644
--- a/hypervisor/arch/arm/exception.S
+++ b/hypervisor/arch/arm/exception.S
@@ -69,3 +69,13 @@ vmreturn:
/* Restore usr regs */
pop {r0-r12, lr}
eret
+
+ /*
+ * Hypervisor calling convention follows the AAPCS:
+ * r0-r3: arguments
+ * r0: return value
+ */
+.globl hvc
+hvc:
+ hvc #0
+ bx lr
diff --git a/hypervisor/arch/arm/include/asm/processor.h b/hypervisor/arch/arm/include/asm/processor.h
index 19872d1..c896990 100644
--- a/hypervisor/arch/arm/include/asm/processor.h
+++ b/hypervisor/arch/arm/include/asm/processor.h
@@ -172,6 +172,7 @@ struct registers {
#define sev() asm volatile("sev\n")

unsigned int smc(unsigned int r0, ...);
+unsigned int hvc(unsigned int r0, ...);

static inline void cpu_relax(void)
{
diff --git a/hypervisor/arch/arm/mmu_hyp.c b/hypervisor/arch/arm/mmu_hyp.c
index 8c460da..38eacbd 100644
--- a/hypervisor/arch/arm/mmu_hyp.c
+++ b/hypervisor/arch/arm/mmu_hyp.c
@@ -31,6 +31,9 @@ static struct {

extern unsigned long trampoline_start, trampoline_end;

+/* When disabling Jailhouse, we will need to restore the Linux stub */
+static unsigned long saved_vectors = 0;
+
static int set_id_map(int i, unsigned long address, unsigned long size)
{
if (i >= ARRAY_SIZE(id_maps))
@@ -195,6 +198,14 @@ int switch_exception_level(struct per_cpu *cpu_data)
JAILHOUSE_BASE);

/*
+ * The hypervisor stub allows its current vector base to be fetched by
+ * doing an HVC with r0 = -1. The vectors will need to be restored when
+ * disabling jailhouse.
+ */
+ if (saved_vectors == 0)
+ saved_vectors = hvc(-1);
+
+ /*
* paging struct won't be easily accessible when initializing el2, only
* the CPU datas will be readable at their physical address
*/
--
1.7.9.5
Jean-Philippe Brucker
2014-08-08 12:03:17 UTC
This patch adds exhaustive handling of hypervisor exits, and the
ability to stop and park CPUs that encounter an unhandled trap, after
dumping their EL1 context.

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/control.c | 62 +++++++++++++++++++++++++--
hypervisor/arch/arm/exception.S | 21 ++++++---
hypervisor/arch/arm/include/asm/processor.h | 9 +++-
hypervisor/arch/arm/include/asm/sysregs.h | 3 ++
hypervisor/arch/arm/setup.c | 2 -
hypervisor/arch/arm/traps.c | 28 ++++++++++--
6 files changed, 109 insertions(+), 16 deletions(-)

diff --git a/hypervisor/arch/arm/control.c b/hypervisor/arch/arm/control.c
index 9ff26d7..a614483 100644
--- a/hypervisor/arch/arm/control.c
+++ b/hypervisor/arch/arm/control.c
@@ -17,6 +17,7 @@
#include <asm/traps.h>
#include <jailhouse/control.h>
#include <jailhouse/printk.h>
+#include <jailhouse/processor.h>
#include <jailhouse/string.h>

static void arch_reset_el1(struct registers *regs)
@@ -139,6 +140,28 @@ static void arch_suspend_self(struct per_cpu *cpu_data)
arch_cpu_tlb_flush(cpu_data);
}

+static void arch_dump_exit(const char *reason)
+{
+ unsigned long pc;
+
+ arm_read_banked_reg(ELR_hyp, pc);
+ printk("Unhandled HYP %s exit at 0x%lx\n", reason, pc);
+}
+
+static void arch_dump_abt(bool is_data)
+{
+ u32 hxfar;
+ u32 esr;
+
+ arm_read_sysreg(ESR_EL2, esr);
+ if (is_data)
+ arm_read_sysreg(HDFAR, hxfar);
+ else
+ arm_read_sysreg(HIFAR, hxfar);
+
+ printk(" paddr=0x%lx esr=0x%x\n", hxfar, esr);
+}
+
struct registers* arch_handle_exit(struct per_cpu *cpu_data,
struct registers *regs)
{
@@ -149,10 +172,27 @@ struct registers* arch_handle_exit(struct per_cpu *cpu_data,
case EXIT_REASON_TRAP:
arch_handle_trap(cpu_data, regs);
break;
+
+ case EXIT_REASON_UNDEF:
+ arch_dump_exit("undef");
+ panic_stop(cpu_data);
+ case EXIT_REASON_DABT:
+ arch_dump_exit("data abort");
+ arch_dump_abt(true);
+ panic_stop(cpu_data);
+ case EXIT_REASON_PABT:
+ arch_dump_exit("prefetch abort");
+ arch_dump_abt(false);
+ panic_stop(cpu_data);
+ case EXIT_REASON_HVC:
+ arch_dump_exit("hvc");
+ panic_stop(cpu_data);
+ case EXIT_REASON_FIQ:
+ arch_dump_exit("fiq");
+ panic_stop(cpu_data);
default:
- printk("Internal error: %d exit not implemented\n",
- regs->exit_reason);
- while(1);
+ arch_dump_exit("unknown");
+ panic_stop(cpu_data);
}

return regs;
@@ -269,3 +309,19 @@ void arch_config_commit(struct per_cpu *cpu_data,

arch_cpu_tlb_flush(cpu_data);
}
+
+void arch_panic_stop(struct per_cpu *cpu_data)
+{
+ psci_cpu_off(cpu_data);
+ __builtin_unreachable();
+}
+
+void arch_panic_halt(struct per_cpu *cpu_data)
+{
+ /* Won't return to panic_halt */
+ if (phys_processor_id() == panic_cpu)
+ panic_in_progress = 0;
+
+ psci_cpu_off(cpu_data);
+ __builtin_unreachable();
+}
diff --git a/hypervisor/arch/arm/exception.S b/hypervisor/arch/arm/exception.S
index 6190098..ede7a13 100644
--- a/hypervisor/arch/arm/exception.S
+++ b/hypervisor/arch/arm/exception.S
@@ -19,13 +19,13 @@
.align 5
hyp_vectors:
b .
- b .
- b .
- b .
- b .
+ b hyp_undef
+ b hyp_hvc
+ b hyp_pabt
+ b hyp_dabt
b hyp_trap
b hyp_irq
- b .
+ b hyp_fiq

.macro handle_vmexit exit_reason
/* Fill the struct registers. Should comply with NUM_USR_REGS */
@@ -34,8 +34,19 @@ hyp_vectors:
b vmexit_common
.endm

+hyp_undef:
+ handle_vmexit EXIT_REASON_UNDEF
+hyp_hvc:
+ handle_vmexit EXIT_REASON_HVC
+hyp_pabt:
+ handle_vmexit EXIT_REASON_PABT
+hyp_dabt:
+ handle_vmexit EXIT_REASON_DABT
+
hyp_irq:
handle_vmexit EXIT_REASON_IRQ
+hyp_fiq:
+ handle_vmexit EXIT_REASON_FIQ
hyp_trap:
handle_vmexit EXIT_REASON_TRAP

diff --git a/hypervisor/arch/arm/include/asm/processor.h b/hypervisor/arch/arm/include/asm/processor.h
index f231e16..48b803d 100644
--- a/hypervisor/arch/arm/include/asm/processor.h
+++ b/hypervisor/arch/arm/include/asm/processor.h
@@ -142,8 +142,13 @@
#define ESR_ICC_CV_BIT (1 << 24)
#define ESR_ICC_COND(icc) ((icc) >> 20 & 0xf)

-#define EXIT_REASON_TRAP 0x1
-#define EXIT_REASON_IRQ 0x2
+#define EXIT_REASON_UNDEF 0x1
+#define EXIT_REASON_HVC 0x2
+#define EXIT_REASON_PABT 0x3
+#define EXIT_REASON_DABT 0x4
+#define EXIT_REASON_TRAP 0x5
+#define EXIT_REASON_IRQ 0x6
+#define EXIT_REASON_FIQ 0x7

#define NUM_USR_REGS 14

diff --git a/hypervisor/arch/arm/include/asm/sysregs.h b/hypervisor/arch/arm/include/asm/sysregs.h
index 1f9abeb..4760756 100644
--- a/hypervisor/arch/arm/include/asm/sysregs.h
+++ b/hypervisor/arch/arm/include/asm/sysregs.h
@@ -69,6 +69,9 @@
#define VBAR SYSREG_32(0, c12, c0, 0)
#define HCR SYSREG_32(4, c1, c1, 0)
#define HCR2 SYSREG_32(4, c1, c1, 4)
+#define HDFAR SYSREG_32(4, c6, c0, 0)
+#define HIFAR SYSREG_32(4, c6, c0, 2)
+#define HPFAR SYSREG_32(4, c6, c0, 4)
#define HMAIR0 SYSREG_32(4, c10, c2, 0)
#define HMAIR1 SYSREG_32(4, c10, c2, 1)
#define HVBAR SYSREG_32(4, c12, c0, 0)
diff --git a/hypervisor/arch/arm/setup.c b/hypervisor/arch/arm/setup.c
index 8f76fa9..ba7de4a 100644
--- a/hypervisor/arch/arm/setup.c
+++ b/hypervisor/arch/arm/setup.c
@@ -125,5 +125,3 @@ void arch_cpu_restore(struct per_cpu *cpu_data)
#include <jailhouse/string.h>
void arch_shutdown_cpu(unsigned int cpu_id) {}
void arch_shutdown(void) {}
-void arch_panic_stop(struct per_cpu *cpu_data) {__builtin_unreachable();}
-void arch_panic_halt(struct per_cpu *cpu_data) {}
diff --git a/hypervisor/arch/arm/traps.c b/hypervisor/arch/arm/traps.c
index 1016ece..2de6293 100644
--- a/hypervisor/arch/arm/traps.c
+++ b/hypervisor/arch/arm/traps.c
@@ -188,6 +188,22 @@ static void access_cell_reg(struct trap_context *ctx, u8 reg,
}
}

+static void dump_guest_regs(struct per_cpu *cpu_data, struct trap_context *ctx)
+{
+ u8 reg;
+ unsigned long reg_val;
+
+ panic_printk("pc=0x%08x cpsr=0x%08x esr=0x%08x\n", ctx->pc, ctx->cpsr,
+ ctx->esr);
+ for (reg = 0; reg < 15; reg++) {
+ access_cell_reg(ctx, reg, &reg_val, true);
+ panic_printk("r%d=0x%08x ", reg, reg_val);
+ if ((reg + 1) % 4 == 0)
+ panic_printk("\n");
+ }
+ panic_printk("\n");
+}
+
static int arch_handle_hvc(struct per_cpu *cpu_data, struct trap_context *ctx)
{
unsigned long *regs = ctx->regs;
@@ -257,10 +273,14 @@ void arch_handle_trap(struct per_cpu *cpu_data, struct registers *guest_regs)
if (trap_handlers[exception_class])
ret = trap_handlers[exception_class](cpu_data, &ctx);

- if (ret != TRAP_HANDLED) {
- panic_printk("CPU%d: Unhandled HYP trap, syndrome 0x%x\n",
- cpu_data->cpu_id, ctx.esr);
- while(1);
+ switch (ret) {
+ case TRAP_UNHANDLED:
+ case TRAP_FORBIDDEN:
+ panic_printk("FATAL: %s on CPU%d\n", (ret == TRAP_UNHANDLED ?
+ "unhandled trap" : "forbidden access"),
+ cpu_data->cpu_id);
+ dump_guest_regs(cpu_data, &ctx);
+ panic_halt(cpu_data);
}

restore_context:
--
1.7.9.5
Jean-Philippe Brucker
2014-08-08 12:03:14 UTC
Before jumping to EL2, which has its MMU disabled, the data caches need
to be cleaned, in order to be coherent with the EL1 context.
This patch implements the complete data cache flush by set/way, and
enables the EL2 caches if possible.

Please note that the hypervisor always assumes that the kernel sets up
its memory coherently between the cores, which means that all the
relevant memory regions (i.e. the Jailhouse code and data) are expected
to be cacheable and inner-shareable, so that only a clean is needed
before turning the caches off.
In the short section where the MMU is off, the hypervisor doesn't write
anything to memory.

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/Makefile | 2 +-
hypervisor/arch/arm/caches.S | 88 +++++++++++++++++++++++++++++
hypervisor/arch/arm/include/asm/control.h | 4 ++
hypervisor/arch/arm/include/asm/sysregs.h | 7 +++
hypervisor/arch/arm/mmu_hyp.c | 32 ++++++++---
5 files changed, 125 insertions(+), 8 deletions(-)
create mode 100644 hypervisor/arch/arm/caches.S

diff --git a/hypervisor/arch/arm/Makefile b/hypervisor/arch/arm/Makefile
index 6445d15..2a4d343 100644
--- a/hypervisor/arch/arm/Makefile
+++ b/hypervisor/arch/arm/Makefile
@@ -15,7 +15,7 @@ KBUILD_AFLAGS := $(filter-out -include asm/unified.h,$(KBUILD_AFLAGS))
always := built-in.o

obj-y := entry.o dbg-write.o exception.o setup.o control.o lib.o traps.o
-obj-y += paging.o mmu_hyp.o mmu_cell.o
+obj-y += paging.o mmu_hyp.o mmu_cell.o caches.o
obj-y += psci.o psci_low.o spin.o
obj-y += irqchip.o
obj-$(CONFIG_ARM_GIC_V3) += gic-v3.o
diff --git a/hypervisor/arch/arm/caches.S b/hypervisor/arch/arm/caches.S
new file mode 100644
index 0000000..f965e6a
--- /dev/null
+++ b/hypervisor/arch/arm/caches.S
@@ -0,0 +1,88 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ * Authors:
+ * Jean-Philippe Brucker <jean-***@arm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <asm/control.h>
+#include <asm/head.h>
+#include <asm/sysregs.h>
+
+/*
+ * Clean the whole data cache
+ * Taken from the ARM ARM example code (B2.2.7)
+ * r0: 0 - clean
+ * 1 - clean + invalidate
+ */
+ .global arch_cpu_dcaches_flush
+arch_cpu_dcaches_flush:
+ push {r0-r11}
+ mov r11, r0
+
+ dsb
+ arm_read_sysreg(CLIDR_EL1, r0)
+ ands r3, r0, #0x07000000
+ lsr r3, #23 @ Extract level of coherency
+ beq finish
+
+ mov r9, #0 @ Cache level - 1
+ @ Loop caches
+loop_caches:
+ add r2, r9, r9, lsr #1
+ lsr r1, r0, r2 @ Extract current level type
+ and r1, r1, #7
+ cmp r1, #2
+ blt next_cache @ No cache or instruction only
+
+ arm_write_sysreg(CSSELR_EL1, r9)
+ isb @ sync selector change
+ arm_read_sysreg(CSSIDR_EL1, r1)
+
+ and r2, r1, #7 @ extract log2(line size - 4)
+ add r2, #4
+ ldr r4, =0x3ff
+ ands r4, r4, r1, lsr #3
+ clz r5, r4 @ Max way size
+ mov r8, r5 @ Working copy of the way size
+
+loop_sets:
+ ldr r7, =0x7fff
+ ands r7, r7, r1, lsr #13 @ Max number of index size
+loop_ways:
+ orr r10, r9, r8, lsl r5 @ Factor in the way and cache numbers
+ orr r10, r10, r7, lsl r2 @ Factor in the index number
+
+ cmp r11, #CACHES_CLEAN
+ bne 1f
+ arm_write_sysreg(DCCSW, r10) @ Clean
+ b 2f
+1: arm_write_sysreg(DCCISW, r10) @ Clean+Invalidate
+2:
+ subs r7, r7, #1 @ decrement index
+ bge loop_ways
+ subs r8, r8, #1 @ decrement way
+ bge loop_sets
+
+next_cache:
+ add r9, r9, #2 @ increment cache number
+ cmp r3, r9
+ bgt loop_caches
+ dsb
+
+finish: isb
+ pop {r0-r11}
+ bx lr
+
+ .global arch_cpu_icache_flush
+arch_cpu_icache_flush:
+ dsb
+ arm_write_sysreg(ICIALLU, r0) @ r0 value is ignored
+ dsb
+ isb
+ bx lr
diff --git a/hypervisor/arch/arm/include/asm/control.h b/hypervisor/arch/arm/include/asm/control.h
index 78ecbd6..2ada50d 100644
--- a/hypervisor/arch/arm/include/asm/control.h
+++ b/hypervisor/arch/arm/include/asm/control.h
@@ -19,8 +19,12 @@
#define SGI_INJECT 0
#define SGI_CPU_OFF 1

+#define CACHES_CLEAN 0
+#define CACHES_CLEAN_INVALIDATE 1
+
#ifndef __ASSEMBLY__

+void arch_cpu_dcaches_flush(unsigned int action);
int arch_mmu_cell_init(struct cell *cell);
void arch_mmu_cell_destroy(struct cell *cell);
int arch_mmu_cpu_cell_init(struct per_cpu *cpu_data);
diff --git a/hypervisor/arch/arm/include/asm/sysregs.h b/hypervisor/arch/arm/include/asm/sysregs.h
index 9ed2d4e..347ad04 100644
--- a/hypervisor/arch/arm/include/asm/sysregs.h
+++ b/hypervisor/arch/arm/include/asm/sysregs.h
@@ -36,6 +36,8 @@
#define ACTLR_EL1 SYSREG_32(0, c1, c0, 1)
#define CPACR_EL1 SYSREG_32(0, c1, c0, 2)
#define CONTEXTIDR_EL1 SYSREG_32(0, c13, c0, 1)
+#define CSSIDR_EL1 SYSREG_32(1, c0, c0, 0)
+#define CLIDR_EL1 SYSREG_32(1, c0, c0, 1)
#define CSSELR_EL1 SYSREG_32(2, c0, c0, 0)
#define SCTLR_EL2 SYSREG_32(4, c1, c0, 0)
#define ESR_EL2 SYSREG_32(4, c5, c2, 0)
@@ -90,6 +92,11 @@

#define ATS1HR SYSREG_32(4, c7, c8, 0)

+#define ICIALLUIS SYSREG_32(0, c7, c1, 0)
+#define ICIALLU SYSREG_32(0, c7, c5, 0)
+#define DCCSW SYSREG_32(0, c7, c10, 2)
+#define DCCISW SYSREG_32(0, c7, c14, 2)
+
#define TLBIALL SYSREG_32(0, c8, c7, 0)
#define TLBIALLIS SYSREG_32(0, c8, c3, 0)
#define TLBIASID SYSREG_32(0, c8, c7, 2)
diff --git a/hypervisor/arch/arm/mmu_hyp.c b/hypervisor/arch/arm/mmu_hyp.c
index fcfae05..8c460da 100644
--- a/hypervisor/arch/arm/mmu_hyp.c
+++ b/hypervisor/arch/arm/mmu_hyp.c
@@ -10,6 +10,7 @@
* the COPYING file in the top-level directory.
*/

+#include <asm/control.h>
#include <asm/setup.h>
#include <asm/setup_mmu.h>
#include <asm/sysregs.h>
@@ -102,11 +103,11 @@ setup_mmu_el2(struct per_cpu *cpu_data, phys2virt_t phys2virt, u64 ttbr)
| (TCR_RGN_WB_WA << TCR_ORGN0_SHIFT)
| (TCR_INNER_SHAREABLE << TCR_SH0_SHIFT)
| HTCR_RES1;
- u32 sctlr;
+ u32 sctlr_el1, sctlr_el2;

/* Ensure that MMU is disabled. */
- arm_read_sysreg(SCTLR_EL2, sctlr);
- if (sctlr & SCTLR_M_BIT)
+ arm_read_sysreg(SCTLR_EL2, sctlr_el2);
+ if (sctlr_el2 & SCTLR_M_BIT)
return;

/*
@@ -120,14 +121,24 @@ setup_mmu_el2(struct per_cpu *cpu_data, phys2virt_t phys2virt, u64 ttbr)
arm_write_sysreg(TTBR0_EL2, ttbr);
arm_write_sysreg(TCR_EL2, tcr);

- /* Flush TLB */
+ /*
+ * Flush HYP TLB. It should only be necessary if a previous hypervisor
+ * was running.
+ */
arm_write_sysreg(TLBIALLH, 1);
dsb(nsh);

+ /*
+ * We need coherency with the kernel in order to use the setup
+ * spinlocks: only enable the caches if they are enabled at EL1.
+ */
+ arm_read_sysreg(SCTLR_EL1, sctlr_el1);
+ sctlr_el1 &= (SCTLR_I_BIT | SCTLR_C_BIT);
+
/* Enable stage-1 translation */
- arm_read_sysreg(SCTLR_EL2, sctlr);
- sctlr |= SCTLR_M_BIT;
- arm_write_sysreg(SCTLR_EL2, sctlr);
+ arm_read_sysreg(SCTLR_EL2, sctlr_el2);
+ sctlr_el2 |= SCTLR_M_BIT | sctlr_el1;
+ arm_write_sysreg(SCTLR_EL2, sctlr_el2);
isb();

/*
@@ -201,6 +212,13 @@ int switch_exception_level(struct per_cpu *cpu_data)
return -E2BIG;
create_id_maps();

+ /*
+ * Before doing anything hairy, we need to sync the caches with memory:
+ * they will be off at EL2. From this point forward and until the caches
+ * are re-enabled, we cannot write anything critical to memory.
+ */
+ arch_cpu_dcaches_flush(CACHES_CLEAN);
+
cpu_switch_el2(phys_bootstrap, virt2phys);
/*
* At this point, we are at EL2, and we work with physical addresses.
--
1.7.9.5
Jean-Philippe Brucker
2014-08-08 12:03:20 UTC
Since each cell has its own set of CPU ids, it can't access the
redistributor associated with its MPIDR. Instead, the MMIO accesses are
translated to the corresponding physical redistributor, and a read of
the ID register returns the virtual affinity value.

It is a bit more expensive than simply mapping the redistributor into
the cell, but the guest rarely needs to reconfigure its PPIs and IPIs,
so this patch shouldn't introduce any significant performance loss.

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/gic-v3.c | 74 ++++++++++++++++++++++++++++-
hypervisor/arch/arm/include/asm/irqchip.h | 5 ++
hypervisor/arch/arm/irqchip.c | 8 ++++
hypervisor/arch/arm/mmio.c | 2 +-
4 files changed, 87 insertions(+), 2 deletions(-)

diff --git a/hypervisor/arch/arm/gic-v3.c b/hypervisor/arch/arm/gic-v3.c
index c5a108a..ddd5d4e 100644
--- a/hypervisor/arch/arm/gic-v3.c
+++ b/hypervisor/arch/arm/gic-v3.c
@@ -31,6 +31,7 @@

static unsigned int gic_num_lr;
static unsigned int gic_num_priority_bits;
+static u32 gic_version;

static void *gicr_base;
static unsigned int gicr_size;
@@ -94,7 +95,6 @@ static int gic_cpu_init(struct per_cpu *cpu_data)
{
u64 typer;
u32 pidr;
- u32 gic_version;
u32 cell_icc_ctlr, cell_icc_pmr, cell_icc_igrpen1;
u32 ich_vtr;
u32 ich_vmcr;
@@ -341,6 +341,77 @@ static int gic_inject_irq(struct per_cpu *cpu_data, struct pending_irq *irq)
return 0;
}

+static int gic_handle_redist_access(struct per_cpu *cpu_data,
+ struct mmio_access *access)
+{
+ unsigned int cpu;
+ unsigned int reg;
+ int ret = TRAP_UNHANDLED;
+ unsigned int virt_id;
+ void *virt_redist = 0;
+ void *phys_redist = 0;
+ unsigned int redist_size = (gic_version == 4) ? 0x40000 : 0x20000;
+ void *address = (void *)access->addr;
+
+ /*
+ * The redistributor accessed by the cell is not the one stored in this
+ * cpu_data, but the one associated with its virtual id. So we first
+ * need to translate the redistributor address.
+ */
+ for_each_cpu(cpu, cpu_data->cell->cpu_set) {
+ virt_id = cpu_phys2virt(cpu);
+ virt_redist = per_cpu(virt_id)->gicr_base;
+ if (address >= virt_redist && address < virt_redist
+ + redist_size) {
+ phys_redist = per_cpu(cpu)->gicr_base;
+ break;
+ }
+ }
+
+ if (phys_redist == NULL)
+ return TRAP_FORBIDDEN;
+
+ reg = address - virt_redist;
+ access->addr = (unsigned long)phys_redist + reg;
+
+ /* Change the ID register, all other accesses are allowed. */
+ if (!access->is_write) {
+ switch (reg) {
+ case GICR_TYPER:
+ if (virt_id == cpu_data->cell->arch.last_virt_id)
+ access->val = GICR_TYPER_Last;
+ else
+ access->val = 0;
+ /* AArch64 can use a writeq for this register */
+ if (access->size == 8)
+ access->val |= (u64)virt_id << 32;
+
+ ret = TRAP_HANDLED;
+ break;
+ case GICR_TYPER + 4:
+ /* Upper bits contain the affinity */
+ access->val = virt_id;
+ ret = TRAP_HANDLED;
+ break;
+ }
+ }
+ if (ret == TRAP_HANDLED)
+ return ret;
+
+ arch_mmio_access(access);
+ return TRAP_HANDLED;
+}
+
+static int gic_mmio_access(struct per_cpu *cpu_data, struct mmio_access *access)
+{
+ void *address = (void *)access->addr;
+
+ if (address >= gicr_base && address < gicr_base + gicr_size)
+ return gic_handle_redist_access(cpu_data, access);
+
+ return TRAP_UNHANDLED;
+}
+
struct irqchip_ops gic_irqchip = {
.init = gic_init,
.cpu_init = gic_cpu_init,
@@ -349,4 +420,5 @@ struct irqchip_ops gic_irqchip = {
.handle_irq = gic_handle_irq,
.inject_irq = gic_inject_irq,
.eoi_irq = gic_eoi_irq,
+ .mmio_access = gic_mmio_access,
};
diff --git a/hypervisor/arch/arm/include/asm/irqchip.h b/hypervisor/arch/arm/include/asm/irqchip.h
index bdb7b99..a4e625d 100644
--- a/hypervisor/arch/arm/include/asm/irqchip.h
+++ b/hypervisor/arch/arm/include/asm/irqchip.h
@@ -22,6 +22,7 @@
#define MAX_PENDING_IRQS (PAGE_SIZE / sizeof(struct pending_irq))

#include <asm/percpu.h>
+#include <asm/traps.h>

#ifndef __ASSEMBLY__

@@ -51,6 +52,8 @@ struct irqchip_ops {
void (*handle_irq)(struct per_cpu *cpu_data);
void (*eoi_irq)(u32 irqn, bool deactivate);
int (*inject_irq)(struct per_cpu *cpu_data, struct pending_irq *irq);
+
+ int (*mmio_access)(struct per_cpu *cpu_data, struct mmio_access *access);
};

/* Virtual interrupts waiting to be injected */
@@ -82,6 +85,8 @@ int irqchip_send_sgi(struct sgi *sgi);
void irqchip_handle_irq(struct per_cpu *cpu_data);
void irqchip_eoi_irq(u32 irqn, bool deactivate);

+int irqchip_mmio_access(struct per_cpu *cpu_data, struct mmio_access *access);
+
int irqchip_inject_pending(struct per_cpu *cpu_data);
int irqchip_insert_pending(struct per_cpu *cpu_data, struct pending_irq *irq);
int irqchip_remove_pending(struct per_cpu *cpu_data, struct pending_irq *irq);
diff --git a/hypervisor/arch/arm/irqchip.c b/hypervisor/arch/arm/irqchip.c
index 16ae482..356f3be 100644
--- a/hypervisor/arch/arm/irqchip.c
+++ b/hypervisor/arch/arm/irqchip.c
@@ -223,6 +223,14 @@ int irqchip_cpu_reset(struct per_cpu *cpu_data)
return 0;
}

+int irqchip_mmio_access(struct per_cpu *cpu_data, struct mmio_access *access)
+{
+ if (irqchip.mmio_access)
+ return irqchip.mmio_access(cpu_data, access);
+
+ return TRAP_UNHANDLED;
+}
+
/* Only the GIC is implemented */
extern struct irqchip_ops gic_irqchip;

diff --git a/hypervisor/arch/arm/mmio.c b/hypervisor/arch/arm/mmio.c
index 56e6f35..c27005f 100644
--- a/hypervisor/arch/arm/mmio.c
+++ b/hypervisor/arch/arm/mmio.c
@@ -138,7 +138,7 @@ int arch_handle_dabt(struct per_cpu *cpu_data, struct trap_context *ctx)
access.is_write = is_write;
access.size = size;

- /* ret = sub-handler call... */
+ ret = irqchip_mmio_access(cpu_data, &access);

if (ret == TRAP_HANDLED) {
/* Put the read value into the dest register */
--
1.7.9.5
Jean-Philippe Brucker
2014-08-08 12:03:27 UTC
Shutting down the GIC on the root cell consists of re-enabling direct
access to the CPU interface.

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/gic-v3.c | 28 ++++++++++++++++++++++++----
hypervisor/arch/arm/include/asm/irqchip.h | 3 ++-
hypervisor/arch/arm/irqchip.c | 8 +++++++-
3 files changed, 33 insertions(+), 6 deletions(-)

diff --git a/hypervisor/arch/arm/gic-v3.c b/hypervisor/arch/arm/gic-v3.c
index d3cc678..4336550 100644
--- a/hypervisor/arch/arm/gic-v3.c
+++ b/hypervisor/arch/arm/gic-v3.c
@@ -52,11 +52,13 @@ static int gic_init(void)
return err;
}

-static int gic_cpu_reset(struct per_cpu *cpu_data)
+static int gic_cpu_reset(struct per_cpu *cpu_data, bool is_shutdown)
{
unsigned int i;
void *gicr = cpu_data->gicr_base;
unsigned long active;
+ bool root_shutdown = is_shutdown && (cpu_data->cell == &root_cell);
+ u32 ich_vmcr;

if (gicr == 0)
return -ENODEV;
@@ -73,8 +75,13 @@ static int gic_cpu_reset(struct per_cpu *cpu_data)
arm_write_sysreg(ICC_DIR_EL1, i);
}

- /* Disable all PPIs, ensure IPIs are enabled */
- writel_relaxed(0xffff0000, gicr + GICR_ICENABLER);
+ /*
+ * Disable all PPIs, ensure IPIs are enabled.
+ * On shutdown, the root cell expects to find all its PPIs still enabled
+ * when returning to the driver.
+ */
+ if (!root_shutdown)
+ writel_relaxed(0xffff0000, gicr + GICR_ICENABLER);
writel_relaxed(0x0000ffff, gicr + GICR_ISENABLER);

/* Clear active priority bits */
@@ -87,8 +94,21 @@ static int gic_cpu_reset(struct per_cpu *cpu_data)
arm_write_sysreg(ICH_AP1R3_EL2, 0);
}

+ if (root_shutdown) {
+ /* Restore the root config */
+ arm_read_sysreg(ICH_VMCR_EL2, ich_vmcr);
+
+ if (!(ich_vmcr & ICH_VMCR_VEOIM)) {
+ u32 icc_ctlr;
+ arm_read_sysreg(ICC_CTLR_EL1, icc_ctlr);
+ icc_ctlr &= ~ICC_CTLR_EOImode;
+ arm_write_sysreg(ICC_CTLR_EL1, icc_ctlr);
+ }
+
+ arm_write_sysreg(ICH_HCR_EL2, 0);
+ }
+
arm_write_sysreg(ICH_VMCR_EL2, 0);
- arm_write_sysreg(ICH_HCR_EL2, ICH_HCR_EN);

return 0;
}
diff --git a/hypervisor/arch/arm/include/asm/irqchip.h b/hypervisor/arch/arm/include/asm/irqchip.h
index 4a985aa..eb72f41 100644
--- a/hypervisor/arch/arm/include/asm/irqchip.h
+++ b/hypervisor/arch/arm/include/asm/irqchip.h
@@ -48,7 +48,7 @@ struct irqchip_ops {
int (*cpu_init)(struct per_cpu *cpu_data);
void (*cell_init)(struct cell *cell);
void (*cell_exit)(struct cell *cell);
- int (*cpu_reset)(struct per_cpu *cpu_data);
+ int (*cpu_reset)(struct per_cpu *cpu_data, bool is_shutdown);

int (*send_sgi)(struct sgi *sgi);
void (*handle_irq)(struct per_cpu *cpu_data);
@@ -82,6 +82,7 @@ struct pending_irq {
int irqchip_init(void);
int irqchip_cpu_init(struct per_cpu *cpu_data);
int irqchip_cpu_reset(struct per_cpu *cpu_data);
+void irqchip_cpu_shutdown(struct per_cpu *cpu_data);

void irqchip_cell_init(struct cell *cell);
void irqchip_cell_exit(struct cell *cell);
diff --git a/hypervisor/arch/arm/irqchip.c b/hypervisor/arch/arm/irqchip.c
index c03b660..b1a9a59 100644
--- a/hypervisor/arch/arm/irqchip.c
+++ b/hypervisor/arch/arm/irqchip.c
@@ -218,11 +218,17 @@ int irqchip_cpu_reset(struct per_cpu *cpu_data)
return err;

if (irqchip.cpu_reset)
- return irqchip.cpu_reset(cpu_data);
+ return irqchip.cpu_reset(cpu_data, false);

return 0;
}

+void irqchip_cpu_shutdown(struct per_cpu *cpu_data)
+{
+ if (irqchip.cpu_reset)
+ irqchip.cpu_reset(cpu_data, true);
+}
+
int irqchip_mmio_access(struct per_cpu *cpu_data, struct mmio_access *access)
{
if (irqchip.mmio_access)
--
1.7.9.5
Jean-Philippe Brucker
2014-08-08 12:03:33 UTC
This patch implements the counters that report the number of VM exits,
accessible by the driver. It also adds three statistics for the ARM
side: the number of IRQs injected, the number of IPIs injected, and the
number of GIC maintenance IRQs received.

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
driver.c | 8 ++++++++
hypervisor/arch/arm/control.c | 8 ++++++++
hypervisor/arch/arm/gic-common.c | 2 ++
.../arch/arm/include/asm/jailhouse_hypercall.h | 5 ++++-
hypervisor/arch/arm/mmio.c | 2 ++
5 files changed, 24 insertions(+), 1 deletion(-)

diff --git a/driver.c b/driver.c
index f0e68a1..f66af8b 100644
--- a/driver.c
+++ b/driver.c
@@ -149,6 +149,10 @@ JAILHOUSE_CPU_STATS_ATTR(vmexits_cr, JAILHOUSE_CPU_STAT_VMEXITS_CR);
JAILHOUSE_CPU_STATS_ATTR(vmexits_msr, JAILHOUSE_CPU_STAT_VMEXITS_MSR);
JAILHOUSE_CPU_STATS_ATTR(vmexits_cpuid, JAILHOUSE_CPU_STAT_VMEXITS_CPUID);
JAILHOUSE_CPU_STATS_ATTR(vmexits_xsetbv, JAILHOUSE_CPU_STAT_VMEXITS_XSETBV);
+#elif defined(CONFIG_ARM)
+JAILHOUSE_CPU_STATS_ATTR(vmexits_maintenance, JAILHOUSE_CPU_STAT_VMEXITS_MAINTENANCE);
+JAILHOUSE_CPU_STATS_ATTR(vmexits_virt_irq, JAILHOUSE_CPU_STAT_VMEXITS_VIRQ);
+JAILHOUSE_CPU_STATS_ATTR(vmexits_virt_sgi, JAILHOUSE_CPU_STAT_VMEXITS_VSGI);
#endif

static struct attribute *no_attrs[] = {
@@ -163,6 +167,10 @@ static struct attribute *no_attrs[] = {
&vmexits_msr_attr.kattr.attr,
&vmexits_cpuid_attr.kattr.attr,
&vmexits_xsetbv_attr.kattr.attr,
+#elif defined(CONFIG_ARM)
+ &vmexits_maintenance_attr.kattr.attr,
+ &vmexits_virt_irq_attr.kattr.attr,
+ &vmexits_virt_sgi_attr.kattr.attr,
#endif
NULL
};
diff --git a/hypervisor/arch/arm/control.c b/hypervisor/arch/arm/control.c
index 9aa4609..a0c6226 100644
--- a/hypervisor/arch/arm/control.c
+++ b/hypervisor/arch/arm/control.c
@@ -177,6 +177,8 @@ static void arch_dump_abt(bool is_data)
struct registers* arch_handle_exit(struct per_cpu *cpu_data,
struct registers *regs)
{
+ cpu_data->stats[JAILHOUSE_CPU_STAT_VMEXITS_TOTAL]++;
+
switch (regs->exit_reason) {
case EXIT_REASON_IRQ:
irqchip_handle_irq(cpu_data);
@@ -268,6 +270,8 @@ void arch_suspend_cpu(unsigned int cpu_id)

void arch_handle_sgi(struct per_cpu *cpu_data, u32 irqn)
{
+ cpu_data->stats[JAILHOUSE_CPU_STAT_VMEXITS_MANAGEMENT]++;
+
switch (irqn) {
case SGI_INJECT:
irqchip_inject_pending(cpu_data);
@@ -287,10 +291,14 @@ void arch_handle_sgi(struct per_cpu *cpu_data, u32 irqn)
bool arch_handle_phys_irq(struct per_cpu *cpu_data, u32 irqn)
{
if (irqn == MAINTENANCE_IRQ) {
+ cpu_data->stats[JAILHOUSE_CPU_STAT_VMEXITS_MAINTENANCE]++;
+
irqchip_inject_pending(cpu_data);
return true;
}

+ cpu_data->stats[JAILHOUSE_CPU_STAT_VMEXITS_VIRQ]++;
+
irqchip_set_pending(cpu_data, irqn, true);

return false;
diff --git a/hypervisor/arch/arm/gic-common.c b/hypervisor/arch/arm/gic-common.c
index 673b932..d6444c2 100644
--- a/hypervisor/arch/arm/gic-common.c
+++ b/hypervisor/arch/arm/gic-common.c
@@ -272,6 +272,8 @@ int gic_handle_sgir_write(struct per_cpu *cpu_data, struct sgi *sgi,
struct cell *cell = cpu_data->cell;
bool is_target = false;

+ cpu_data->stats[JAILHOUSE_CPU_STAT_VMEXITS_VSGI]++;
+
targets = sgi->targets;
sgi->targets = 0;

diff --git a/hypervisor/arch/arm/include/asm/jailhouse_hypercall.h b/hypervisor/arch/arm/include/asm/jailhouse_hypercall.h
index 1967138..918791e 100644
--- a/hypervisor/arch/arm/include/asm/jailhouse_hypercall.h
+++ b/hypervisor/arch/arm/include/asm/jailhouse_hypercall.h
@@ -19,7 +19,10 @@
#define JAILHOUSE_CALL_ARG2 "r2"

/* CPU statistics */
-#define JAILHOUSE_NUM_CPU_STATS JAILHOUSE_GENERIC_CPU_STATS
+#define JAILHOUSE_CPU_STAT_VMEXITS_MAINTENANCE JAILHOUSE_GENERIC_CPU_STATS
+#define JAILHOUSE_CPU_STAT_VMEXITS_VIRQ JAILHOUSE_GENERIC_CPU_STATS + 1
+#define JAILHOUSE_CPU_STAT_VMEXITS_VSGI JAILHOUSE_GENERIC_CPU_STATS + 2
+#define JAILHOUSE_NUM_CPU_STATS JAILHOUSE_GENERIC_CPU_STATS + 3

#ifndef __asmeq
#define __asmeq(x, y) ".ifnc " x "," y " ; .err ; .endif\n\t"
diff --git a/hypervisor/arch/arm/mmio.c b/hypervisor/arch/arm/mmio.c
index bc283a7..97a8fec 100644
--- a/hypervisor/arch/arm/mmio.c
+++ b/hypervisor/arch/arm/mmio.c
@@ -115,6 +115,8 @@ int arch_handle_dabt(struct per_cpu *cpu_data, struct trap_context *ctx)
access.addr = hpfar << 8;
access.addr |= hdfar & 0xfff;

+ cpu_data->stats[JAILHOUSE_CPU_STAT_VMEXITS_MMIO]++;
+
/*
* Invalid instruction syndrome means multiple access or writeback, there
* is nothing we can do.
--
1.7.9.5
Jean-Philippe Brucker
2014-08-08 12:03:16 UTC
This patch fills in several paging stubs:
- the arch_config_commit stub, which is called by the core each time the
  memory is remapped. It invalidates the TLBs on all affected CPUs.
- the flush_cache function, which is used to flush the hypervisor page
  table entries when using the PAGE_MAP_COHERENT flag (useful for an
  IOMMU; not currently in use on the ARM side).
- the arch_tlb_flush_page function, which is used to invalidate a TLB
  entry after modifying the hypervisor paging structures. It must ignore
  accesses done from the initial setup code at EL1, which are committed
  once at EL2 with a TLBIALLH, just before enabling the MMU.

arch_config_commit is used in the following cases:
- in arch_init_late, after creating the root page tables. In this case,
  only the master CPU is affected.
- when creating, loading or destroying a cell. In this case, TLBs are
  invalidated on the current CPU, on all the root cell's CPUs after they
  are resumed, and on the cell's CPUs that are being reset, by using a
  `cell_pages_dirty' boolean.

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/control.c | 28 +++++++++++++++++++++++++++
hypervisor/arch/arm/include/asm/control.h | 1 +
hypervisor/arch/arm/include/asm/paging.h | 17 ++++++++++++++++
hypervisor/arch/arm/include/asm/percpu.h | 2 +-
hypervisor/arch/arm/include/asm/processor.h | 10 ++++++++++
hypervisor/arch/arm/include/asm/sysregs.h | 2 ++
hypervisor/arch/arm/mmu_cell.c | 17 ++++++++++++++--
hypervisor/arch/arm/setup.c | 10 ++++++++--
8 files changed, 82 insertions(+), 5 deletions(-)

diff --git a/hypervisor/arch/arm/control.c b/hypervisor/arch/arm/control.c
index 1988850..9ff26d7 100644
--- a/hypervisor/arch/arm/control.c
+++ b/hypervisor/arch/arm/control.c
@@ -134,6 +134,9 @@ static void arch_reset_self(struct per_cpu *cpu_data)
static void arch_suspend_self(struct per_cpu *cpu_data)
{
psci_suspend(cpu_data);
+
+ if (cpu_data->cell_pages_dirty)
+ arch_cpu_tlb_flush(cpu_data);
}

struct registers* arch_handle_exit(struct per_cpu *cpu_data,
@@ -241,3 +244,28 @@ void arch_cell_destroy(struct per_cpu *cpu_data, struct cell *cell)
for_each_cpu(cpu, cell->cpu_set)
arch_reset_cpu(cpu);
}
+
+void arch_config_commit(struct per_cpu *cpu_data,
+ struct cell *cell_added_removed)
+{
+ unsigned int cpu;
+
+ /*
+ * Reconfiguration of the page tables is done while the cells are
+ * spinning. They will need to flush their TLBs right after they are
+ * resumed.
+ * When init_late calls arch_config_commit, the root cell's bitmap has
+ * not yet been populated by register_root_cpu, so the only invalidated
+ * TLBs are those of the master CPU.
+ */
+ for_each_cpu_except(cpu, root_cell.cpu_set, cpu_data->cpu_id)
+ per_cpu(cpu)->cell_pages_dirty = true;
+
+ if (cell_added_removed) {
+ for_each_cpu_except(cpu, cell_added_removed->cpu_set,
+ cpu_data->cpu_id)
+ per_cpu(cpu)->cell_pages_dirty = true;
+ }
+
+ arch_cpu_tlb_flush(cpu_data);
+}
diff --git a/hypervisor/arch/arm/include/asm/control.h b/hypervisor/arch/arm/include/asm/control.h
index 592ee29..bb97ff3 100644
--- a/hypervisor/arch/arm/include/asm/control.h
+++ b/hypervisor/arch/arm/include/asm/control.h
@@ -26,6 +26,7 @@

void arch_cpu_dcaches_flush(unsigned int action);
void arch_cpu_icache_flush(void);
+void arch_cpu_tlb_flush(struct per_cpu *cpu_data);
void arch_cell_caches_flush(struct cell *cell);
int arch_mmu_cell_init(struct cell *cell);
void arch_mmu_cell_destroy(struct cell *cell);
diff --git a/hypervisor/arch/arm/include/asm/paging.h b/hypervisor/arch/arm/include/asm/paging.h
index 969c71d..b3a1c12 100644
--- a/hypervisor/arch/arm/include/asm/paging.h
+++ b/hypervisor/arch/arm/include/asm/paging.h
@@ -14,6 +14,7 @@
#define _JAILHOUSE_ASM_PAGING_H

#include <asm/processor.h>
+#include <asm/sysregs.h>
#include <asm/types.h>
#include <jailhouse/utils.h>

@@ -175,12 +176,28 @@

typedef u64 *pt_entry_t;

+/* Only executed on hypervisor paging struct changes */
static inline void arch_tlb_flush_page(unsigned long addr)
{
+ /*
+ * This instruction is UNDEF at EL1, but the whole TLB is invalidated
+ * before enabling the EL2 stage 1 MMU anyway.
+ */
+ if (is_el2())
+ arm_write_sysreg(TLBIMVAH, addr & PAGE_MASK);
}

+extern unsigned int cache_line_size;
+
+/* Used to clean the PAGE_MAP_COHERENT page table changes */
static inline void flush_cache(void *addr, long size)
{
+ do {
+ /* Clean & invalidate by MVA to PoC */
+ arm_write_sysreg(DCCIMVAC, addr);
+ size -= cache_line_size;
+ addr += cache_line_size;
+ } while (size > 0);
}

#endif /* !__ASSEMBLY__ */
diff --git a/hypervisor/arch/arm/include/asm/percpu.h b/hypervisor/arch/arm/include/asm/percpu.h
index e1c198c..5f9f4ae 100644
--- a/hypervisor/arch/arm/include/asm/percpu.h
+++ b/hypervisor/arch/arm/include/asm/percpu.h
@@ -58,7 +58,7 @@ struct per_cpu {
__attribute__((aligned(8))) struct psci_mbox psci_mbox;

bool cpu_stopped;
- bool flush_caches;
+ bool cell_pages_dirty;
int shutdown_state;
bool failed;
} __attribute__((aligned(PAGE_SIZE)));
diff --git a/hypervisor/arch/arm/include/asm/processor.h b/hypervisor/arch/arm/include/asm/processor.h
index 9c1fe75..f231e16 100644
--- a/hypervisor/arch/arm/include/asm/processor.h
+++ b/hypervisor/arch/arm/include/asm/processor.h
@@ -13,6 +13,7 @@
#ifndef _JAILHOUSE_ASM_PROCESSOR_H
#define _JAILHOUSE_ASM_PROCESSOR_H

+#include <asm/types.h>
#include <jailhouse/utils.h>

#define PSR_MODE_MASK 0xf
@@ -172,6 +173,15 @@ static inline void memory_barrier(void)
dmb(ish);
}

+static inline bool is_el2(void)
+{
+ u32 psr;
+
+ asm volatile ("mrs %0, cpsr" : "=r" (psr));
+
+ return (psr & PSR_MODE_MASK) == PSR_HYP_MODE;
+}
+
#endif /* !__ASSEMBLY__ */

#endif /* !_JAILHOUSE_ASM_PROCESSOR_H */
diff --git a/hypervisor/arch/arm/include/asm/sysregs.h b/hypervisor/arch/arm/include/asm/sysregs.h
index 347ad04..1f9abeb 100644
--- a/hypervisor/arch/arm/include/asm/sysregs.h
+++ b/hypervisor/arch/arm/include/asm/sysregs.h
@@ -29,6 +29,7 @@
* 32bit sysregs definitions
* (Use the AArch64 names to ease the compatibility work)
*/
+#define CTR_EL0 SYSREG_32(0, c0, c0, 1)
#define MPIDR_EL1 SYSREG_32(0, c0, c0, 5)
#define ID_PFR0_EL1 SYSREG_32(0, c0, c1, 0)
#define ID_PFR1_EL1 SYSREG_32(0, c0, c1, 1)
@@ -94,6 +95,7 @@

#define ICIALLUIS SYSREG_32(0, c7, c1, 0)
#define ICIALLU SYSREG_32(0, c7, c5, 0)
+#define DCCIMVAC SYSREG_32(0, c7, c10, 1)
#define DCCSW SYSREG_32(0, c7, c10, 2)
#define DCCISW SYSREG_32(0, c7, c14, 2)

diff --git a/hypervisor/arch/arm/mmu_cell.c b/hypervisor/arch/arm/mmu_cell.c
index e7e57f7..3aef369 100644
--- a/hypervisor/arch/arm/mmu_cell.c
+++ b/hypervisor/arch/arm/mmu_cell.c
@@ -95,14 +95,27 @@ int arch_mmu_cpu_cell_init(struct per_cpu *cpu_data)
arm_write_sysreg(VTTBR_EL2, vttbr);
arm_write_sysreg(VTCR_EL2, vtcr);

+ /* Ensure that the new VMID is present before invalidating the TLBs */
isb();
/*
+ * At initialisation, arch_config_commit does not act on other CPUs,
+ * since they register themselves to the root cpu_set afterwards. This
+ * means that this unconditional flush is redundant on the master CPU.
+ */
+ arch_cpu_tlb_flush(cpu_data);
+
+ return 0;
+}
+
+void arch_cpu_tlb_flush(struct per_cpu *cpu_data)
+{
+ /*
* Invalidate all stage-1 and 2 TLB entries for the current VMID
* ERET will ensure completion of these ops
*/
arm_write_sysreg(TLBIALL, 1);
-
- return 0;
+ dsb(nsh);
+ cpu_data->cell_pages_dirty = false;
}

void arch_cell_caches_flush(struct cell *cell)
diff --git a/hypervisor/arch/arm/setup.c b/hypervisor/arch/arm/setup.c
index ebd1716..8f76fa9 100644
--- a/hypervisor/arch/arm/setup.c
+++ b/hypervisor/arch/arm/setup.c
@@ -21,14 +21,22 @@
#include <jailhouse/paging.h>
#include <jailhouse/string.h>

+unsigned int cache_line_size;
+
static int arch_check_features(void)
{
u32 pfr1;
+ u32 ctr;
+
arm_read_sysreg(ID_PFR1_EL1, pfr1);

if (!PFR1_VIRT(pfr1))
return -ENODEV;

+ arm_read_sysreg(CTR_EL0, ctr);
+ /* Extract the minimal cache line size */
+ cache_line_size = 4 << (ctr >> 16 & 0xf);
+
return 0;
}

@@ -116,8 +124,6 @@ void arch_cpu_restore(struct per_cpu *cpu_data)
#include <jailhouse/control.h>
#include <jailhouse/string.h>
void arch_shutdown_cpu(unsigned int cpu_id) {}
-void arch_config_commit(struct per_cpu *cpu_data,
- struct cell *cell_added_removed) {}
void arch_shutdown(void) {}
void arch_panic_stop(struct per_cpu *cpu_data) {__builtin_unreachable();}
void arch_panic_halt(struct per_cpu *cpu_data) {}
--
1.7.9.5
--
You received this message because you are subscribed to the Google Groups "Jailhouse" group.
To unsubscribe from this group and stop receiving emails from it, send an email to jailhouse-dev+***@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
Jean-Philippe Brucker
2014-08-08 12:03:19 UTC
To handle SMP guests, the cells need to be assigned virtual CPU IDs
through the VMPIDR register. For the moment, those IDs are simply
generated incrementally on each CPU.

This change will allow the same guest code to be used in different cells.
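
The incremental assignment and the reverse lookup can be sketched as a simplified standalone model (the array-based cpu_set and the function names here are illustrative; the real code iterates the cell's bitmap with for_each_cpu and stores virt_id in struct per_cpu):

```c
#include <assert.h>

#define MAX_CPUS 8

/* Simplified model of the virt_id assignment done in arch_cell_create:
 * each CPU's virtual ID is its position in the cell's CPU set. */
static unsigned int virt_id[MAX_CPUS];

static void assign_virt_ids(const unsigned int *cell_cpus, unsigned int n)
{
	unsigned int i;

	for (i = 0; i < n; i++)
		virt_id[cell_cpus[i]] = i;
}

/* Reverse lookup, mirroring cpu_virt2phys: scan the cell's CPU set for
 * the matching virtual ID, or return -1 when it is not found. */
static int virt2phys(const unsigned int *cell_cpus, unsigned int n,
		     unsigned int vid)
{
	unsigned int i;

	for (i = 0; i < n; i++)
		if (virt_id[cell_cpus[i]] == vid)
			return cell_cpus[i];
	return -1;
}
```

So a cell built from physical CPUs {2, 3} sees itself as CPUs {0, 1}, which is what lets identical guest images boot in different cells.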

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/control.c | 22 +++++++++++++++++++++-
hypervisor/arch/arm/gic-v3.c | 6 ++++--
hypervisor/arch/arm/include/asm/cell.h | 2 ++
hypervisor/arch/arm/include/asm/percpu.h | 19 +++++++++++++++++++
hypervisor/arch/arm/include/asm/processor.h | 3 ++-
hypervisor/arch/arm/include/asm/sysregs.h | 1 +
hypervisor/arch/arm/setup.c | 1 +
7 files changed, 50 insertions(+), 4 deletions(-)

diff --git a/hypervisor/arch/arm/control.c b/hypervisor/arch/arm/control.c
index a614483..0bfcda6 100644
--- a/hypervisor/arch/arm/control.c
+++ b/hypervisor/arch/arm/control.c
@@ -123,6 +123,9 @@ static void arch_reset_self(struct per_cpu *cpu_data)
else
reset_address = 0;

+ /* Set the new MPIDR */
+ arm_write_sysreg(VMPIDR_EL2, cpu_data->virt_id | MPIDR_MP_BIT);
+
/* Restore an empty context */
arch_reset_el1(regs);

@@ -267,22 +270,39 @@ void arch_handle_sgi(struct per_cpu *cpu_data, u32 irqn)
int arch_cell_create(struct per_cpu *cpu_data, struct cell *cell)
{
int err;
+ unsigned int cpu;
+ unsigned int virt_id = 0;

err = arch_mmu_cell_init(cell);
if (err)
return err;

+ /*
+ * Generate a virtual CPU id according to the position of each CPU in
+ * the cell set
+ */
+ for_each_cpu(cpu, cell->cpu_set) {
+ per_cpu(cpu)->virt_id = virt_id;
+ virt_id++;
+ }
+ cell->arch.last_virt_id = virt_id - 1;
+
return 0;
}

void arch_cell_destroy(struct per_cpu *cpu_data, struct cell *cell)
{
unsigned int cpu;
+ struct per_cpu *percpu;

arch_mmu_cell_destroy(cell);

- for_each_cpu(cpu, cell->cpu_set)
+ for_each_cpu(cpu, cell->cpu_set) {
+ percpu = per_cpu(cpu);
+ /* Re-assign the physical IDs for the root cell */
+ percpu->virt_id = percpu->cpu_id;
arch_reset_cpu(cpu);
+ }
}

void arch_config_commit(struct per_cpu *cpu_data,
diff --git a/hypervisor/arch/arm/gic-v3.c b/hypervisor/arch/arm/gic-v3.c
index f4f88ff..c5a108a 100644
--- a/hypervisor/arch/arm/gic-v3.c
+++ b/hypervisor/arch/arm/gic-v3.c
@@ -197,7 +197,7 @@ int gicv3_handle_sgir_write(struct per_cpu *cpu_data, u64 sgir)
{
struct sgi sgi;
struct cell *cell = cpu_data->cell;
- unsigned int cpu;
+ unsigned int cpu, virt_id;
unsigned long this_cpu = cpu_data->cpu_id;
unsigned long routing_mode = !!(sgir & ICC_SGIR_ROUTING_BIT);
unsigned long targets = sgir & ICC_SGIR_TARGET_MASK;
@@ -212,7 +212,9 @@ int gicv3_handle_sgir_write(struct per_cpu *cpu_data, u64 sgir)
sgi.id = SGI_INJECT;

for_each_cpu_except(cpu, cell->cpu_set, this_cpu) {
- if (routing_mode == 0 && !test_bit(cpu, &targets))
+ virt_id = cpu_phys2virt(cpu);
+
+ if (routing_mode == 0 && !test_bit(virt_id, &targets))
continue;
else if (routing_mode == 1 && cpu == this_cpu)
continue;
diff --git a/hypervisor/arch/arm/include/asm/cell.h b/hypervisor/arch/arm/include/asm/cell.h
index 8f65a96..6bc6903 100644
--- a/hypervisor/arch/arm/include/asm/cell.h
+++ b/hypervisor/arch/arm/include/asm/cell.h
@@ -27,6 +27,8 @@ struct arch_cell {

spinlock_t caches_lock;
bool needs_flush;
+
+ unsigned int last_virt_id;
};

struct cell {
diff --git a/hypervisor/arch/arm/include/asm/percpu.h b/hypervisor/arch/arm/include/asm/percpu.h
index 5f9f4ae..3f67ed4 100644
--- a/hypervisor/arch/arm/include/asm/percpu.h
+++ b/hypervisor/arch/arm/include/asm/percpu.h
@@ -28,6 +28,7 @@
#include <asm/cell.h>
#include <asm/psci.h>
#include <asm/spinlock.h>
+#include <jailhouse/control.h>

struct pending_irq;

@@ -40,6 +41,7 @@ struct per_cpu {
unsigned long linux_reg[NUM_ENTRY_REGS];

unsigned int cpu_id;
+ unsigned int virt_id;

/* Other CPUs can insert sgis into the pending array */
spinlock_t gic_lock;
@@ -77,6 +79,23 @@ static inline struct registers *guest_regs(struct per_cpu *cpu_data)
- sizeof(struct registers));
}

+static inline unsigned int cpu_phys2virt(unsigned int cpu_id)
+{
+ return per_cpu(cpu_id)->virt_id;
+}
+
+static inline unsigned int cpu_virt2phys(struct cell *cell, unsigned int virt_id)
+{
+ unsigned int cpu;
+
+ for_each_cpu(cpu, cell->cpu_set) {
+ if (per_cpu(cpu)->virt_id == virt_id)
+ return cpu;
+ }
+
+ return -1;
+}
+
/* Validate defines */
#define CHECK_ASSUMPTION(assume) ((void)sizeof(char[1 - 2*!(assume)]))

diff --git a/hypervisor/arch/arm/include/asm/processor.h b/hypervisor/arch/arm/include/asm/processor.h
index 78223d1..fd0e1af 100644
--- a/hypervisor/arch/arm/include/asm/processor.h
+++ b/hypervisor/arch/arm/include/asm/processor.h
@@ -41,6 +41,8 @@
| PSR_32_BIT)

#define MPIDR_CPUID_MASK 0x00ffffff
+#define MPIDR_MP_BIT (1 << 31)
+#define MPIDR_U_BIT (1 << 30)

#define PFR1_VIRT(pfr) ((pfr) >> 12 & 0xf)

@@ -68,7 +70,6 @@
| SCTLR_UWXN_BIT | SCTLR_FI_BIT | SCTLR_EE_BIT \
| SCTLR_TRE_BIT | SCTLR_AFE_BIT | SCTLR_TE_BIT)

-
#define HCR_TRVM_BIT (1 << 30)
#define HCR_TVM_BIT (1 << 26)
#define HCR_HDC_BIT (1 << 29)
diff --git a/hypervisor/arch/arm/include/asm/sysregs.h b/hypervisor/arch/arm/include/asm/sysregs.h
index 4760756..3cdd634 100644
--- a/hypervisor/arch/arm/include/asm/sysregs.h
+++ b/hypervisor/arch/arm/include/asm/sysregs.h
@@ -40,6 +40,7 @@
#define CSSIDR_EL1 SYSREG_32(1, c0, c0, 0)
#define CLIDR_EL1 SYSREG_32(1, c0, c0, 1)
#define CSSELR_EL1 SYSREG_32(2, c0, c0, 0)
+#define VMPIDR_EL2 SYSREG_32(4, c0, c0, 5)
#define SCTLR_EL2 SYSREG_32(4, c1, c0, 0)
#define ESR_EL2 SYSREG_32(4, c5, c2, 0)
#define TPIDR_EL2 SYSREG_32(4, c13, c0, 2)
diff --git a/hypervisor/arch/arm/setup.c b/hypervisor/arch/arm/setup.c
index ba7de4a..b0209a2 100644
--- a/hypervisor/arch/arm/setup.c
+++ b/hypervisor/arch/arm/setup.c
@@ -62,6 +62,7 @@ int arch_cpu_init(struct per_cpu *cpu_data)
unsigned long hcr = HCR_VM_BIT | HCR_IMO_BIT | HCR_FMO_BIT;

cpu_data->psci_mbox.entry = 0;
+ cpu_data->virt_id = cpu_data->cpu_id;

/*
* Copy the registers to restore from the linux stack here, because we
--
1.7.9.5
Jean-Philippe Brucker
2014-08-08 12:03:30 UTC
Some functions are generic enough to be shared with the GICv2 backend.
This patch moves them into the common code.

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/control.c | 17 ++++++
hypervisor/arch/arm/gic-common.c | 62 ++++++++++++++++++++++
hypervisor/arch/arm/gic-v3.c | 72 ++------------------------
hypervisor/arch/arm/include/asm/control.h | 1 +
hypervisor/arch/arm/include/asm/gic_common.h | 4 ++
hypervisor/arch/arm/include/asm/gic_v3.h | 8 +++
6 files changed, 95 insertions(+), 69 deletions(-)

diff --git a/hypervisor/arch/arm/control.c b/hypervisor/arch/arm/control.c
index 58f6bed..9aa4609 100644
--- a/hypervisor/arch/arm/control.c
+++ b/hypervisor/arch/arm/control.c
@@ -12,6 +12,7 @@

#include <asm/control.h>
#include <asm/irqchip.h>
+#include <asm/platform.h>
#include <asm/processor.h>
#include <asm/sysregs.h>
#include <asm/traps.h>
@@ -279,6 +280,22 @@ void arch_handle_sgi(struct per_cpu *cpu_data, u32 irqn)
}
}

+/*
+ * Handle the maintenance interrupt; all other IRQs are injected into the cell.
+ * Return true when the IRQ has been handled by the hyp.
+ */
+bool arch_handle_phys_irq(struct per_cpu *cpu_data, u32 irqn)
+{
+ if (irqn == MAINTENANCE_IRQ) {
+ irqchip_inject_pending(cpu_data);
+ return true;
+ }
+
+ irqchip_set_pending(cpu_data, irqn, true);
+
+ return false;
+}
+
int arch_cell_create(struct per_cpu *cpu_data, struct cell *cell)
{
int err;
diff --git a/hypervisor/arch/arm/gic-common.c b/hypervisor/arch/arm/gic-common.c
index dcde88e..16ca2d6 100644
--- a/hypervisor/arch/arm/gic-common.c
+++ b/hypervisor/arch/arm/gic-common.c
@@ -11,6 +11,7 @@
*/

#include <asm/cell.h>
+#include <asm/control.h>
#include <asm/gic_common.h>
#include <asm/io.h>
#include <asm/irqchip.h>
@@ -143,6 +144,37 @@ static int handle_irq_route(struct per_cpu *cpu_data,
}
}

+int gic_handle_sgir_write(struct per_cpu *cpu_data, struct sgi *sgi,
+ bool virt_input)
+{
+ unsigned int cpu;
+ unsigned long targets;
+ unsigned int this_cpu = cpu_data->cpu_id;
+ struct cell *cell = cpu_data->cell;
+ bool is_target = false;
+
+ targets = sgi->targets;
+ sgi->targets = 0;
+
+ /* Filter the targets */
+ for_each_cpu_except(cpu, cell->cpu_set, this_cpu) {
+ if (virt_input)
+ is_target = !!test_bit(cpu_phys2virt(cpu), &targets);
+
+ if (sgi->routing_mode == 0 && !is_target)
+ continue;
+
+ irqchip_set_pending(per_cpu(cpu), sgi->id, false);
+ sgi->targets |= (1 << cpu);
+ }
+
+ /* Let the other CPUs inject their SGIs */
+ sgi->id = SGI_INJECT;
+ irqchip_send_sgi(sgi);
+
+ return TRAP_HANDLED;
+}
+
int gic_handle_dist_access(struct per_cpu *cpu_data,
struct mmio_access *access)
{
@@ -203,3 +235,33 @@ int gic_handle_dist_access(struct per_cpu *cpu_data,

return ret;
}
+
+void gic_handle_irq(struct per_cpu *cpu_data)
+{
+ bool handled = false;
+ u32 irq_id;
+
+ while (1) {
+ /* Read IAR1: set 'active' state */
+ irq_id = gic_read_iar();
+
+ if (irq_id == 0x3ff) /* Spurious IRQ */
+ break;
+
+ /* Handle IRQ */
+ if (is_sgi(irq_id)) {
+ arch_handle_sgi(cpu_data, irq_id);
+ handled = true;
+ } else {
+ handled = arch_handle_phys_irq(cpu_data, irq_id);
+ }
+
+ /*
+ * Write EOIR1: drop priority, but stay active if handled is
+ * false.
+ * This avoids being re-interrupted by a level-triggered
+ * interrupt that needs handling in the guest (e.g. timer)
+ */
+ irqchip_eoi_irq(irq_id, handled);
+ }
+}
diff --git a/hypervisor/arch/arm/gic-v3.c b/hypervisor/arch/arm/gic-v3.c
index 4336550..19ccc10 100644
--- a/hypervisor/arch/arm/gic-v3.c
+++ b/hypervisor/arch/arm/gic-v3.c
@@ -245,53 +245,17 @@ static int gic_send_sgi(struct sgi *sgi)
int gicv3_handle_sgir_write(struct per_cpu *cpu_data, u64 sgir)
{
struct sgi sgi;
- struct cell *cell = cpu_data->cell;
- unsigned int cpu, virt_id;
- unsigned long this_cpu = cpu_data->cpu_id;
unsigned long routing_mode = !!(sgir & ICC_SGIR_ROUTING_BIT);
- unsigned long targets = sgir & ICC_SGIR_TARGET_MASK;
- u32 irq = sgir >> ICC_SGIR_IRQN_SHIFT & 0xf;

/* FIXME: clusters are not supported yet. */
- sgi.targets = 0;
+ sgi.targets = sgir & ICC_SGIR_TARGET_MASK;
sgi.routing_mode = routing_mode;
sgi.aff1 = sgir >> ICC_SGIR_AFF1_SHIFT & 0xff;
sgi.aff2 = sgir >> ICC_SGIR_AFF2_SHIFT & 0xff;
sgi.aff3 = sgir >> ICC_SGIR_AFF3_SHIFT & 0xff;
- sgi.id = SGI_INJECT;
+ sgi.id = sgir >> ICC_SGIR_IRQN_SHIFT & 0xf;

- for_each_cpu_except(cpu, cell->cpu_set, this_cpu) {
- virt_id = cpu_phys2virt(cpu);
-
- if (routing_mode == 0 && !test_bit(virt_id, &targets))
- continue;
- else if (routing_mode == 1 && cpu == this_cpu)
- continue;
-
- irqchip_set_pending(per_cpu(cpu), irq, false);
- sgi.targets |= (1 << cpu);
- }
-
- /* Let the other CPUS inject their SGIs */
- gic_send_sgi(&sgi);
-
- return TRAP_HANDLED;
-}
-
-/*
- * Handle the maintenance interrupt, the rest is injected into the cell.
- * Return true when the IRQ has been handled by the hyp.
- */
-static bool arch_handle_phys_irq(struct per_cpu *cpu_data, u32 irqn)
-{
- if (irqn == MAINTENANCE_IRQ) {
- irqchip_inject_pending(cpu_data);
- return true;
- }
-
- irqchip_set_pending(cpu_data, irqn, true);
-
- return false;
+ return gic_handle_sgir_write(cpu_data, &sgi, true);
}

static void gic_eoi_irq(u32 irq_id, bool deactivate)
@@ -301,36 +265,6 @@ static void gic_eoi_irq(u32 irq_id, bool deactivate)
arm_write_sysreg(ICC_DIR_EL1, irq_id);
}

-static void gic_handle_irq(struct per_cpu *cpu_data)
-{
- bool handled = false;
- u32 irq_id;
-
- while (1) {
- /* Read ICC_IAR1: set 'active' state */
- arm_read_sysreg(ICC_IAR1_EL1, irq_id);
-
- if (irq_id == 0x3ff) /* Spurious IRQ */
- break;
-
- /* Handle IRQ */
- if (is_sgi(irq_id)) {
- arch_handle_sgi(cpu_data, irq_id);
- handled = true;
- } else {
- handled = arch_handle_phys_irq(cpu_data, irq_id);
- }
-
- /*
- * Write ICC_EOIR1: drop priority, but stay active if handled is
- * false.
- * This allows to not be re-interrupted by a level-triggered
- * interrupt that needs handling in the guest (e.g. timer)
- */
- gic_eoi_irq(irq_id, handled);
- }
-}
-
static int gic_inject_irq(struct per_cpu *cpu_data, struct pending_irq *irq)
{
int i;
diff --git a/hypervisor/arch/arm/include/asm/control.h b/hypervisor/arch/arm/include/asm/control.h
index 7bff77f..6bcecf2 100644
--- a/hypervisor/arch/arm/include/asm/control.h
+++ b/hypervisor/arch/arm/include/asm/control.h
@@ -35,6 +35,7 @@ void arch_handle_sgi(struct per_cpu *cpu_data, u32 irqn);
void arch_handle_trap(struct per_cpu *cpu_data, struct registers *guest_regs);
struct registers* arch_handle_exit(struct per_cpu *cpu_data,
struct registers *regs);
+bool arch_handle_phys_irq(struct per_cpu *cpu_data, u32 irqn);
void arch_reset_self(struct per_cpu *cpu_data);
void arch_shutdown_self(struct per_cpu *cpu_data);

diff --git a/hypervisor/arch/arm/include/asm/gic_common.h b/hypervisor/arch/arm/include/asm/gic_common.h
index dc25279..9d87ccb 100644
--- a/hypervisor/arch/arm/include/asm/gic_common.h
+++ b/hypervisor/arch/arm/include/asm/gic_common.h
@@ -44,9 +44,13 @@

struct mmio_access;
struct per_cpu;
+struct sgi;

int gic_handle_dist_access(struct per_cpu *cpu_data,
struct mmio_access *access);
+int gic_handle_sgir_write(struct per_cpu *cpu_data, struct sgi *sgi,
+ bool virt_input);
+void gic_handle_irq(struct per_cpu *cpu_data);

#endif /* !__ASSEMBLY__ */
#endif /* !_JAILHOUSE_ASM_GIC_COMMON_H */
diff --git a/hypervisor/arch/arm/include/asm/gic_v3.h b/hypervisor/arch/arm/include/asm/gic_v3.h
index 3318398..5f2061f 100644
--- a/hypervisor/arch/arm/include/asm/gic_v3.h
+++ b/hypervisor/arch/arm/include/asm/gic_v3.h
@@ -252,6 +252,14 @@ static inline void gic_write_lr(unsigned int n, u64 val)
}
}

+static inline u32 gic_read_iar(void)
+{
+ u32 irq_id;
+
+ arm_read_sysreg(ICC_IAR1_EL1, irq_id);
+ return irq_id;
+}
+
struct per_cpu;
int gicv3_handle_sgir_write(struct per_cpu *cpu_data, u64 sgir);
--
1.7.9.5
--
You received this message because you are subscribed to the Google Groups "Jailhouse" group.
To unsubscribe from this group and stop receiving emails from it, send an email to jailhouse-dev+***@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
Jean-Philippe Brucker
2014-08-08 12:03:31 UTC
This patch implements the following GICv2 features:
- Remap GICC to GICV in the cells to provide a virtual interface
- Guest SGI filtering and hyp SGI handling
- IRQ injection
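
The GICv2 SGI trigger register packs its fields into a single 32-bit write; the decoding can be sketched as a hypothetical standalone helper mirroring the handle_sgir_access hunk below (the struct and function names here are illustrative, not part of the patch):

```c
#include <assert.h>
#include <stdint.h>

struct sgi_fields {
	unsigned int targets;		/* CPUTargetList, bits [23:16] */
	unsigned int routing_mode;	/* TargetListFilter, bits [25:24] */
	unsigned int id;		/* SGIINTID, bits [3:0] */
};

/* Field layout mirrors the GICD_SGIR decoding in handle_sgir_access */
static struct sgi_fields decode_sgir(uint32_t val)
{
	struct sgi_fields sgi = {
		.targets = (val >> 16) & 0xff,
		.routing_mode = (val >> 24) & 0x3,
		.id = val & 0xf,
	};
	return sgi;
}
```

Unlike the GICv3 system-register interface, there are no affinity fields here, which is why the patch zeroes aff1..aff3 before handing the struct to the common gic_handle_sgir_write path.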

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/Makefile | 1 +
hypervisor/arch/arm/gic-common.c | 23 +++
hypervisor/arch/arm/gic-v2.c | 285 ++++++++++++++++++++++++++++
hypervisor/arch/arm/include/asm/gic_v2.h | 121 ++++++++++++
hypervisor/arch/arm/include/asm/platform.h | 16 ++
hypervisor/arch/arm/irqchip.c | 1 -
6 files changed, 446 insertions(+), 1 deletion(-)
create mode 100644 hypervisor/arch/arm/gic-v2.c
create mode 100644 hypervisor/arch/arm/include/asm/gic_v2.h

diff --git a/hypervisor/arch/arm/Makefile b/hypervisor/arch/arm/Makefile
index 7beb612..9f6d121 100644
--- a/hypervisor/arch/arm/Makefile
+++ b/hypervisor/arch/arm/Makefile
@@ -20,6 +20,7 @@ obj-y += paging.o mmu_hyp.o mmu_cell.o caches.o
obj-y += psci.o psci_low.o smp.o
obj-y += irqchip.o gic-common.o
obj-$(CONFIG_ARM_GIC_V3) += gic-v3.o
+obj-$(CONFIG_ARM_GIC) += gic-v2.o
obj-$(CONFIG_ARCH_VEXPRESS) += dbg-write-pl011.o
obj-$(CONFIG_ARCH_VEXPRESS) += smp-vexpress.o

diff --git a/hypervisor/arch/arm/gic-common.c b/hypervisor/arch/arm/gic-common.c
index 16ca2d6..2cf5b11 100644
--- a/hypervisor/arch/arm/gic-common.c
+++ b/hypervisor/arch/arm/gic-common.c
@@ -144,6 +144,25 @@ static int handle_irq_route(struct per_cpu *cpu_data,
}
}

+static int handle_sgir_access(struct per_cpu *cpu_data,
+ struct mmio_access *access)
+{
+ struct sgi sgi;
+ unsigned long val = access->val;
+
+ if (!access->is_write)
+ return TRAP_HANDLED;
+
+ sgi.targets = (val >> 16) & 0xff;
+ sgi.routing_mode = (val >> 24) & 0x3;
+ sgi.aff1 = 0;
+ sgi.aff2 = 0;
+ sgi.aff3 = 0;
+ sgi.id = val & 0xf;
+
+ return gic_handle_sgir_write(cpu_data, &sgi, false);
+}
+
int gic_handle_sgir_write(struct per_cpu *cpu_data, struct sgi *sgi,
bool virt_input)
{
@@ -212,6 +231,10 @@ int gic_handle_dist_access(struct per_cpu *cpu_data,
(reg & 0x3ff) / 4, 8, false);
break;

+ case GICD_SGIR:
+ ret = handle_sgir_access(cpu_data, access);
+ break;
+
case GICD_CTLR:
case GICD_TYPER:
case GICD_IIDR:
diff --git a/hypervisor/arch/arm/gic-v2.c b/hypervisor/arch/arm/gic-v2.c
new file mode 100644
index 0000000..71ae1b0
--- /dev/null
+++ b/hypervisor/arch/arm/gic-v2.c
@@ -0,0 +1,285 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ * Authors:
+ * Jean-Philippe Brucker <jean-***@arm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <asm/control.h>
+#include <asm/gic_common.h>
+#include <asm/io.h>
+#include <asm/irqchip.h>
+#include <asm/platform.h>
+#include <asm/setup.h>
+
+static unsigned int gic_num_lr;
+
+extern void *gicd_base;
+extern unsigned int gicd_size;
+void *gicc_base;
+unsigned int gicc_size;
+void *gicv_base;
+void *gich_base;
+unsigned int gich_size;
+
+static int gic_init(void)
+{
+ int err;
+
+ /* FIXME: parse device tree */
+ gicc_base = GICC_BASE;
+ gicc_size = GICC_SIZE;
+ gich_base = GICH_BASE;
+ gich_size = GICH_SIZE;
+ gicv_base = GICV_BASE;
+
+ err = arch_map_device(gicc_base, gicc_base, gicc_size);
+ if (err)
+ return err;
+
+ err = arch_map_device(gich_base, gich_base, gich_size);
+
+ return err;
+}
+
+static int gic_cpu_reset(struct per_cpu *cpu_data, bool is_shutdown)
+{
+ unsigned int i;
+ bool root_shutdown = is_shutdown && (cpu_data->cell == &root_cell);
+ u32 active;
+ u32 gich_vmcr = 0;
+ u32 gicc_ctlr, gicc_pmr;
+
+ /* Clear list registers */
+ for (i = 0; i < gic_num_lr; i++)
+ gic_write_lr(i, 0);
+
+ /* Deactivate all PPIs */
+ active = readl_relaxed(gicd_base + GICD_ISACTIVER);
+ for (i = 16; i < 32; i++) {
+ if (test_bit(i, (unsigned long *)&active))
+ writel_relaxed(i, gicc_base + GICC_DIR);
+ }
+
+ /* Disable PPIs if necessary */
+ if (!root_shutdown)
+ writel_relaxed(0xffff0000, gicd_base + GICD_ICENABLER);
+ /* Ensure IPIs are enabled */
+ writel_relaxed(0x0000ffff, gicd_base + GICD_ISENABLER);
+
+ writel_relaxed(0, gich_base + GICH_APR);
+
+ if (is_shutdown)
+ writel_relaxed(0, gich_base + GICH_HCR);
+
+ if (root_shutdown) {
+ gich_vmcr = readl_relaxed(gich_base + GICH_VMCR);
+ gicc_ctlr = 0;
+ gicc_pmr = (gich_vmcr >> GICH_VMCR_PMR_SHIFT) << GICV_PMR_SHIFT;
+
+ if (gich_vmcr & GICH_VMCR_EN0)
+ gicc_ctlr |= GICC_CTLR_GRPEN1;
+ if (gich_vmcr & GICH_VMCR_EOImode)
+ gicc_ctlr |= GICC_CTLR_EOImode;
+
+ writel_relaxed(gicc_ctlr, gicc_base + GICC_CTLR);
+ writel_relaxed(gicc_pmr, gicc_base + GICC_PMR);
+
+ gich_vmcr = 0;
+ }
+ writel_relaxed(gich_vmcr, gich_base + GICH_VMCR);
+
+ return 0;
+}
+
+static int gic_cpu_init(struct per_cpu *cpu_data)
+{
+ u32 vtr, vmcr;
+ u32 cell_gicc_ctlr, cell_gicc_pmr;
+
+ /* Ensure all IPIs are enabled */
+ writel_relaxed(0x0000ffff, gicd_base + GICD_ISENABLER);
+
+ cell_gicc_ctlr = readl_relaxed(gicc_base + GICC_CTLR);
+ cell_gicc_pmr = readl_relaxed(gicc_base + GICC_PMR);
+
+ writel_relaxed(GICC_CTLR_GRPEN1 | GICC_CTLR_EOImode,
+ gicc_base + GICC_CTLR);
+ writel_relaxed(GICC_PMR_DEFAULT, gicc_base + GICC_PMR);
+
+ vtr = readl_relaxed(gich_base + GICH_VTR);
+ gic_num_lr = (vtr & 0x3f) + 1;
+
+ /* VMCR only contains 5 bits of priority */
+ vmcr = (cell_gicc_pmr >> GICV_PMR_SHIFT) << GICH_VMCR_PMR_SHIFT;
+ /*
+ * All virtual interrupts are group 0 in this driver since the GICV
+ * layout seen by the guest corresponds to GICC without security
+ * extensions:
+ * - A read from GICV_IAR doesn't acknowledge group 1 interrupts
+ * (GICV_AIAR does it, but the guest never attempts to access it)
+ * - A write to GICV_CTLR.GRP0EN corresponds to the GICC_CTLR.GRP1EN bit
+ * Since the guest's driver thinks that it is accessing a GIC with
+ * security extensions, a write to GRP1EN will enable group 0
+ * interrupts.
+ * - Group 0 interrupts are presented as virtual IRQs (FIQEn = 0)
+ */
+ if (cell_gicc_ctlr & GICC_CTLR_GRPEN1)
+ vmcr |= GICH_VMCR_EN0;
+ if (cell_gicc_ctlr & GICC_CTLR_EOImode)
+ vmcr |= GICH_VMCR_EOImode;
+
+ writel_relaxed(vmcr, gich_base + GICH_VMCR);
+ writel_relaxed(GICH_HCR_EN, gich_base + GICH_HCR);
+
+ return 0;
+}
+
+static void gic_eoi_irq(u32 irq_id, bool deactivate)
+{
+ /*
+ * The GIC doesn't seem to care about the CPUID value written to EOIR,
+ * which is rather convenient...
+ */
+ writel_relaxed(irq_id, gicc_base + GICC_EOIR);
+ if (deactivate)
+ writel_relaxed(irq_id, gicc_base + GICC_DIR);
+}
+
+static void gic_route_spis(struct cell *config_cell, struct cell *dest_cell)
+{
+}
+
+static void gic_cell_init(struct cell *cell)
+{
+ struct jailhouse_memory gicv_region;
+
+ /*
+ * target_cpu_map has not been populated by all available CPUs when the
+ * setup code initialises the root cell. It is assumed that the kernel
+ * has already configured all its SPIs anyway, and that it will redirect
+ * them when unplugging a CPU.
+ */
+ if (cell != &root_cell)
+ gic_route_spis(cell, cell);
+
+ gicv_region.phys_start = (unsigned long)gicv_base;
+ /*
+ * WARN: some SoCs (EXYNOS4) use a modified GIC which doesn't have any
+ * banked CPU interface, so we should map per-CPU physical addresses
+ * here.
+ * For now, none of them seems to have virtualization extensions.
+ */
+ gicv_region.virt_start = (unsigned long)gicc_base;
+ gicv_region.size = gicc_size;
+ gicv_region.flags = JAILHOUSE_MEM_DMA | JAILHOUSE_MEM_READ
+ | JAILHOUSE_MEM_WRITE;
+
+ /*
+ * Let the guest access the virtual CPU interface instead of the
+ * physical one
+ */
+ arch_map_memory_region(cell, &gicv_region);
+}
+
+static void gic_cell_exit(struct cell *cell)
+{
+ /* Reset interrupt routing of the cell's SPIs */
+ gic_route_spis(cell, &root_cell);
+}
+
+static int gic_send_sgi(struct sgi *sgi)
+{
+ u32 val;
+
+ if (!is_sgi(sgi->id))
+ return -EINVAL;
+
+ val = (sgi->routing_mode & 0x3) << 24
+ | (sgi->targets & 0xff) << 16
+ | (sgi->id & 0xf);
+
+ writel_relaxed(val, gicd_base + GICD_SGIR);
+
+ return 0;
+}
+
+static int gic_inject_irq(struct per_cpu *cpu_data, struct pending_irq *irq)
+{
+ int i;
+ int first_free = -1;
+ u32 lr;
+ u64 elsr;
+
+ elsr = readl_relaxed(gich_base + GICH_ELSR0);
+ elsr |= (u64)readl_relaxed(gich_base + GICH_ELSR1) << 32;
+ for (i = 0; i < gic_num_lr; i++) {
+ if (test_bit(i, (unsigned long *)&elsr)) {
+ /* Entry is available */
+ if (first_free == -1)
+ first_free = i;
+ continue;
+ }
+
+ /* Check that there is no overlapping */
+ lr = gic_read_lr(i);
+ if ((lr & GICH_LR_VIRT_ID_MASK) == irq->virt_id)
+ return -EINVAL;
+ }
+
+ if (first_free == -1) {
+ /* Enable maintenance IRQ */
+ u32 hcr;
+ hcr = readl_relaxed(gich_base + GICH_HCR);
+ hcr |= GICH_HCR_UIE;
+ writel_relaxed(hcr, gich_base + GICH_HCR);
+
+ return -EBUSY;
+ }
+
+ /* Inject group 0 interrupt (seen as IRQ by the guest) */
+ lr = irq->virt_id;
+ lr |= GICH_LR_PENDING_BIT;
+
+ if (irq->hw) {
+ lr |= GICH_LR_HW_BIT;
+ lr |= irq->type.irq << GICH_LR_PHYS_ID_SHIFT;
+ } else {
+ lr |= irq->type.sgi.cpuid << GICH_LR_CPUID_SHIFT;
+ if (irq->type.sgi.maintenance)
+ lr |= GICH_LR_SGI_EOI_BIT;
+ }
+
+ gic_write_lr(first_free, lr);
+
+ return 0;
+}
+
+static int gic_mmio_access(struct per_cpu *cpu_data, struct mmio_access *access)
+{
+ void *address = (void *)access->addr;
+
+ if (address >= gicd_base && address < gicd_base + gicd_size)
+ return gic_handle_dist_access(cpu_data, access);
+
+ return TRAP_UNHANDLED;
+}
+
+struct irqchip_ops gic_irqchip = {
+ .init = gic_init,
+ .cpu_init = gic_cpu_init,
+ .cpu_reset = gic_cpu_reset,
+ .cell_init = gic_cell_init,
+ .cell_exit = gic_cell_exit,
+
+ .send_sgi = gic_send_sgi,
+ .handle_irq = gic_handle_irq,
+ .inject_irq = gic_inject_irq,
+ .eoi_irq = gic_eoi_irq,
+ .mmio_access = gic_mmio_access,
+};
diff --git a/hypervisor/arch/arm/include/asm/gic_v2.h b/hypervisor/arch/arm/include/asm/gic_v2.h
new file mode 100644
index 0000000..8bd5e6e
--- /dev/null
+++ b/hypervisor/arch/arm/include/asm/gic_v2.h
@@ -0,0 +1,121 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ * Authors:
+ * Jean-Philippe Brucker <jean-***@arm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#ifndef _JAILHOUSE_ASM_GIC_V2_H
+#define _JAILHOUSE_ASM_GIC_V2_H
+
+#define GICD_CIDR0 0xff0
+#define GICD_CIDR1 0xff4
+#define GICD_CIDR2 0xff8
+#define GICD_CIDR3 0xffc
+
+#define GICD_PIDR0 0xfe0
+#define GICD_PIDR1 0xfe4
+#define GICD_PIDR2 0xfe8
+#define GICD_PIDR3 0xfec
+#define GICD_PIDR4 0xfd0
+#define GICD_PIDR5 0xfd4
+#define GICD_PIDR6 0xfd8
+#define GICD_PIDR7 0xfdc
+
+#define GICC_CTLR 0x0000
+#define GICC_PMR 0x0004
+#define GICC_BPR 0x0008
+#define GICC_IAR 0x000c
+#define GICC_EOIR 0x0010
+#define GICC_RPR 0x0014
+#define GICC_HPPIR 0x0018
+#define GICC_ABPR 0x001c
+#define GICC_AIAR 0x0020
+#define GICC_AEOIR 0x0024
+#define GICC_AHPPIR 0x0028
+#define GICC_APR0 0x00d0
+#define GICC_APR1 0x00d4
+#define GICC_APR2 0x00d8
+#define GICC_APR3 0x00dc
+#define GICC_NSAPR0 0x00e0
+#define GICC_NSAPR1 0x00e4
+#define GICC_NSAPR2 0x00e8
+#define GICC_NSAPR3 0x00ec
+#define GICC_IIDR 0x00fc
+#define GICC_DIR 0x1000
+
+#define GICC_CTLR_GRPEN1 (1 << 0)
+#define GICC_CTLR_EOImode (1 << 9)
+
+#define GICC_PMR_DEFAULT 0xf0
+
+#define GICH_HCR 0x000
+#define GICH_VTR 0x004
+#define GICH_VMCR 0x008
+#define GICH_MISR 0x010
+#define GICH_EISR0 0x020
+#define GICH_EISR1 0x024
+#define GICH_ELSR0 0x030
+#define GICH_ELSR1 0x034
+#define GICH_APR 0x0f0
+#define GICH_LR_BASE 0x100
+
+#define GICV_PMR_SHIFT 3
+#define GICH_VMCR_PMR_SHIFT 27
+#define GICH_VMCR_EN0 (1 << 0)
+#define GICH_VMCR_EN1 (1 << 1)
+#define GICH_VMCR_ACKCtl (1 << 2)
+#define GICH_VMCR_EOImode (1 << 9)
+
+#define GICH_HCR_EN (1 << 0)
+#define GICH_HCR_UIE (1 << 1)
+#define GICH_HCR_LRENPIE (1 << 2)
+#define GICH_HCR_NPIE (1 << 3)
+#define GICH_HCR_VGRP0EIE (1 << 4)
+#define GICH_HCR_VGRP0DIE (1 << 5)
+#define GICH_HCR_VGRP1EIE (1 << 6)
+#define GICH_HCR_VGRP1DIE (1 << 7)
+#define GICH_HCR_EOICOUNT_SHIFT 27
+
+#define GICH_LR_HW_BIT (1 << 31)
+#define GICH_LR_GRP1_BIT (1 << 30)
+#define GICH_LR_ACTIVE_BIT (1 << 29)
+#define GICH_LR_PENDING_BIT (1 << 28)
+#define GICH_LR_PRIORITY_SHIFT 23
+#define GICH_LR_SGI_EOI_BIT (1 << 19)
+#define GICH_LR_CPUID_SHIFT 10
+#define GICH_LR_PHYS_ID_SHIFT 10
+#define GICH_LR_VIRT_ID_MASK 0x3ff
+
+#ifndef __ASSEMBLY__
+
+#include <asm/io.h>
+
+static inline u32 gic_read_lr(unsigned int i)
+{
+ extern void *gich_base;
+
+ return readl_relaxed(gich_base + GICH_LR_BASE + i * 4);
+}
+
+static inline void gic_write_lr(unsigned int i, u32 value)
+{
+ extern void *gich_base;
+
+ writel_relaxed(value, gich_base + GICH_LR_BASE + i * 4);
+}
+
+static inline u32 gic_read_iar(void)
+{
+ extern void *gicc_base;
+
+ return readl_relaxed(gicc_base + GICC_IAR) & 0x3ff;
+}
+
+#endif /* !__ASSEMBLY__ */
+#endif /* _JAILHOUSE_ASM_GIC_V2_H */
diff --git a/hypervisor/arch/arm/include/asm/platform.h b/hypervisor/arch/arm/include/asm/platform.h
index df8575a..9c3def7 100644
--- a/hypervisor/arch/arm/include/asm/platform.h
+++ b/hypervisor/arch/arm/include/asm/platform.h
@@ -34,6 +34,22 @@
# define GICR_SIZE 0x100000

# include <asm/gic_v3.h>
+# else /* GICv2 */
+# define GICD_BASE ((void *)0x2c001000)
+# define GICD_SIZE 0x1000
+# define GICC_BASE ((void *)0x2c002000)
+/*
+ * WARN: most device trees are broken and report only one page for the GICC.
+ * It will break the handle_irq code, since the GICC_DIR register is located at
+ * offset 0x1000...
+ */
+# define GICC_SIZE 0x2000
+# define GICH_BASE ((void *)0x2c004000)
+# define GICH_SIZE 0x2000
+# define GICV_BASE ((void *)0x2c006000)
+# define GICV_SIZE 0x2000
+
+# include <asm/gic_v2.h>
# endif /* GIC */

# define MAINTENANCE_IRQ 25
diff --git a/hypervisor/arch/arm/irqchip.c b/hypervisor/arch/arm/irqchip.c
index 7f667cc..ae3bce1 100644
--- a/hypervisor/arch/arm/irqchip.c
+++ b/hypervisor/arch/arm/irqchip.c
@@ -310,7 +310,6 @@ int irqchip_init(void)
pidr2 = readl_relaxed(gicd_base + GICD_PIDR2);
switch (GICD_PIDR2_ARCH(pidr2)) {
case 0x2:
- break;
case 0x3:
case 0x4:
memcpy(&irqchip, &gic_irqchip, sizeof(struct irqchip_ops));
--
1.7.9.5
--
You received this message because you are subscribed to the Google Groups "Jailhouse" group.
To unsubscribe from this group and stop receiving emails from it, send an email to jailhouse-dev+***@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
Jean-Philippe Brucker
2014-08-08 12:03:11 UTC
Permalink
All pending interrupts need to be cleared before running a new guest.
This patch resets the list registers, the software pending queue, and
the GIC hypervisor config registers.
Since the suspend loop was entered through the IRQ handler, we also need
to deactivate the still-active IPI.

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/control.c | 10 +++++
hypervisor/arch/arm/gic-v3.c | 57 +++++++++++++++++++++++++++--
hypervisor/arch/arm/include/asm/gic_v3.h | 8 ++++
hypervisor/arch/arm/include/asm/irqchip.h | 4 ++
hypervisor/arch/arm/irqchip.c | 31 ++++++++++++++--
5 files changed, 103 insertions(+), 7 deletions(-)

diff --git a/hypervisor/arch/arm/control.c b/hypervisor/arch/arm/control.c
index af41050..67ce85f 100644
--- a/hypervisor/arch/arm/control.c
+++ b/hypervisor/arch/arm/control.c
@@ -26,6 +26,16 @@ static void arch_reset_self(struct per_cpu *cpu_data)
if (err)
printk("MMU setup failed\n");

+ /*
+ * We come from the IRQ handler, but we won't return there, so the IPI
+ * is deactivated here.
+ */
+ irqchip_eoi_irq(SGI_CPU_OFF, true);
+
+ err = irqchip_cpu_reset(cpu_data);
+ if (err)
+ printk("IRQ setup failed\n");
+
arm_write_banked_reg(ELR_hyp, 0);
arm_write_banked_reg(SPSR_hyp, RESET_PSR);
memset(regs, 0, sizeof(struct registers));
diff --git a/hypervisor/arch/arm/gic-v3.c b/hypervisor/arch/arm/gic-v3.c
index d67e59c..f4f88ff 100644
--- a/hypervisor/arch/arm/gic-v3.c
+++ b/hypervisor/arch/arm/gic-v3.c
@@ -30,6 +30,7 @@
*/

static unsigned int gic_num_lr;
+static unsigned int gic_num_priority_bits;

static void *gicr_base;
static unsigned int gicr_size;
@@ -48,6 +49,47 @@ static int gic_init(void)
return err;
}

+static int gic_cpu_reset(struct per_cpu *cpu_data)
+{
+ unsigned int i;
+ void *gicr = cpu_data->gicr_base;
+ unsigned long active;
+
+ if (gicr == 0)
+ return -ENODEV;
+
+ /* Clear list registers */
+ for (i = 0; i < gic_num_lr; i++)
+ gic_write_lr(i, 0);
+
+ gicr += GICR_SGI_BASE;
+ active = readl_relaxed(gicr + GICR_ICACTIVER);
+ /* Deactivate all active PPIs */
+ for (i = 16; i < 32; i++) {
+ if (test_bit(i, &active))
+ arm_write_sysreg(ICC_DIR_EL1, i);
+ }
+
+ /* Disable all PPIs, ensure IPIs are enabled */
+ writel_relaxed(0xffff0000, gicr + GICR_ICENABLER);
+ writel_relaxed(0x0000ffff, gicr + GICR_ISENABLER);
+
+ /* Clear active priority bits */
+ if (gic_num_priority_bits >= 5)
+ arm_write_sysreg(ICH_AP1R0_EL2, 0);
+ if (gic_num_priority_bits >= 6)
+ arm_write_sysreg(ICH_AP1R1_EL2, 0);
+ if (gic_num_priority_bits > 6) {
+ arm_write_sysreg(ICH_AP1R2_EL2, 0);
+ arm_write_sysreg(ICH_AP1R3_EL2, 0);
+ }
+
+ arm_write_sysreg(ICH_VMCR_EL2, 0);
+ arm_write_sysreg(ICH_HCR_EL2, ICH_HCR_EN);
+
+ return 0;
+}
+
static int gic_cpu_init(struct per_cpu *cpu_data)
{
u64 typer;
@@ -104,6 +146,7 @@ static int gic_cpu_init(struct per_cpu *cpu_data)

arm_read_sysreg(ICH_VTR_EL2, ich_vtr);
gic_num_lr = (ich_vtr & 0xf) + 1;
+ gic_num_priority_bits = (ich_vtr >> 29) + 1;

ich_vmcr = (cell_icc_pmr & ICC_PMR_MASK) << ICH_VMCR_VPMR_SHIFT;
if (cell_icc_igrpen1 & ICC_IGRPEN1_EN)
@@ -200,6 +243,13 @@ static bool arch_handle_phys_irq(struct per_cpu *cpu_data, u32 irqn)
return false;
}

+static void gic_eoi_irq(u32 irq_id, bool deactivate)
+{
+ arm_write_sysreg(ICC_EOIR1_EL1, irq_id);
+ if (deactivate)
+ arm_write_sysreg(ICC_DIR_EL1, irq_id);
+}
+
static void gic_handle_irq(struct per_cpu *cpu_data)
{
bool handled = false;
@@ -226,10 +276,7 @@ static void gic_handle_irq(struct per_cpu *cpu_data)
* This allows to not be re-interrupted by a level-triggered
* interrupt that needs handling in the guest (e.g. timer)
*/
- arm_write_sysreg(ICC_EOIR1_EL1, irq_id);
- /* Deactivate if necessary */
- if (handled)
- arm_write_sysreg(ICC_DIR_EL1, irq_id);
+ gic_eoi_irq(irq_id, handled);
}
}

@@ -295,7 +342,9 @@ static int gic_inject_irq(struct per_cpu *cpu_data, struct pending_irq *irq)
struct irqchip_ops gic_irqchip = {
.init = gic_init,
.cpu_init = gic_cpu_init,
+ .cpu_reset = gic_cpu_reset,
.send_sgi = gic_send_sgi,
.handle_irq = gic_handle_irq,
.inject_irq = gic_inject_irq,
+ .eoi_irq = gic_eoi_irq,
};
diff --git a/hypervisor/arch/arm/include/asm/gic_v3.h b/hypervisor/arch/arm/include/asm/gic_v3.h
index edc8767..3318398 100644
--- a/hypervisor/arch/arm/include/asm/gic_v3.h
+++ b/hypervisor/arch/arm/include/asm/gic_v3.h
@@ -68,6 +68,10 @@
#define ICC_SRE_EL2 SYSREG_32(4, c12, c9, 5)
#define ICC_IGRPEN1_EL1 SYSREG_32(0, c12, c12, 7)
#define ICC_SGI1R_EL1 SYSREG_64(0, c12)
+#define ICC_AP1R0_EL1 SYSREG_32(0, c12, c9, 0)
+#define ICC_AP1R1_EL1 SYSREG_32(0, c12, c9, 1)
+#define ICC_AP1R2_EL1 SYSREG_32(0, c12, c9, 2)
+#define ICC_AP1R3_EL1 SYSREG_32(0, c12, c9, 3)

#define ICH_HCR_EL2 SYSREG_32(4, c12, c11, 0)
#define ICH_VTR_EL2 SYSREG_32(4, c12, c11, 1)
@@ -75,6 +79,10 @@
#define ICH_EISR_EL2 SYSREG_32(4, c12, c11, 3)
#define ICH_ELSR_EL2 SYSREG_32(4, c12, c11, 5)
#define ICH_VMCR_EL2 SYSREG_32(4, c12, c11, 7)
+#define ICH_AP1R0_EL2 SYSREG_32(4, c12, c9, 0)
+#define ICH_AP1R1_EL2 SYSREG_32(4, c12, c9, 1)
+#define ICH_AP1R2_EL2 SYSREG_32(4, c12, c9, 2)
+#define ICH_AP1R3_EL2 SYSREG_32(4, c12, c9, 3)

/* Different on AArch32 and AArch64... */
#define __ICH_LR0(x) SYSREG_32(4, c12, c12, x)
diff --git a/hypervisor/arch/arm/include/asm/irqchip.h b/hypervisor/arch/arm/include/asm/irqchip.h
index a6b05e4..bdb7b99 100644
--- a/hypervisor/arch/arm/include/asm/irqchip.h
+++ b/hypervisor/arch/arm/include/asm/irqchip.h
@@ -45,9 +45,11 @@ struct sgi {
struct irqchip_ops {
int (*init)(void);
int (*cpu_init)(struct per_cpu *cpu_data);
+ int (*cpu_reset)(struct per_cpu *cpu_data);

int (*send_sgi)(struct sgi *sgi);
void (*handle_irq)(struct per_cpu *cpu_data);
+ void (*eoi_irq)(u32 irqn, bool deactivate);
int (*inject_irq)(struct per_cpu *cpu_data, struct pending_irq *irq);
};

@@ -74,9 +76,11 @@ struct pending_irq {

int irqchip_init(void);
int irqchip_cpu_init(struct per_cpu *cpu_data);
+int irqchip_cpu_reset(struct per_cpu *cpu_data);

int irqchip_send_sgi(struct sgi *sgi);
void irqchip_handle_irq(struct per_cpu *cpu_data);
+void irqchip_eoi_irq(u32 irqn, bool deactivate);

int irqchip_inject_pending(struct per_cpu *cpu_data);
int irqchip_insert_pending(struct per_cpu *cpu_data, struct pending_irq *irq);
diff --git a/hypervisor/arch/arm/irqchip.c b/hypervisor/arch/arm/irqchip.c
index 41f9754..16ae482 100644
--- a/hypervisor/arch/arm/irqchip.c
+++ b/hypervisor/arch/arm/irqchip.c
@@ -33,10 +33,16 @@ static struct irqchip_ops irqchip;

static int irqchip_init_pending(struct per_cpu *cpu_data)
{
- struct pending_irq *pend_array = page_alloc(&mem_pool, 1);
+ struct pending_irq *pend_array;
+
+ if (cpu_data->pending_irqs == NULL) {
+ cpu_data->pending_irqs = pend_array = page_alloc(&mem_pool, 1);
+ if (pend_array == NULL)
+ return -ENOMEM;
+ } else {
+ pend_array = cpu_data->pending_irqs;
+ }

- if (pend_array == NULL)
- return -ENOMEM;
memset(pend_array, 0, PAGE_SIZE);

cpu_data->pending_irqs = pend_array;
@@ -179,6 +185,11 @@ void irqchip_handle_irq(struct per_cpu *cpu_data)
irqchip.handle_irq(cpu_data);
}

+void irqchip_eoi_irq(u32 irqn, bool deactivate)
+{
+ irqchip.eoi_irq(irqn, deactivate);
+}
+
int irqchip_send_sgi(struct sgi *sgi)
{
return irqchip.send_sgi(sgi);
@@ -198,6 +209,20 @@ int irqchip_cpu_init(struct per_cpu *cpu_data)
return 0;
}

+int irqchip_cpu_reset(struct per_cpu *cpu_data)
+{
+ int err;
+
+ err = irqchip_init_pending(cpu_data);
+ if (err)
+ return err;
+
+ if (irqchip.cpu_reset)
+ return irqchip.cpu_reset(cpu_data);
+
+ return 0;
+}
+
/* Only the GIC is implemented */
extern struct irqchip_ops gic_irqchip;
--
1.7.9.5
Jean-Philippe Brucker
2014-08-08 12:03:28 UTC
Permalink
When an HV_DISABLE hypercall is issued on all root CPUs by the driver,
the core `shutdown' function executes the following operations:
- Suspend all non-root cells (all the CPUs are taken to hyp idle mode),
- call arch_shutdown_cpu for all those CPUs,
- call arch_shutdown.
Once the master CPU (the first to take the shutdown lock) has done this,
the other root CPUs don't actually perform any operation.

This patch lets arch_shutdown and arch_shutdown_cpu set a boolean that
is checked by each core right before returning to EL1: for the cells'
CPUs, arch_shutdown_cpu triggers a return to arch_reset_self, which
cleans up EL1 and EL2. On the root CPUs, the exit handler checks this
boolean and calls the shutdown function.

Once inside arch_shutdown_self, the principle is the same as with the
hypervisor initialisation:
- Create identity mappings of the trampoline page and the stack,
- Jump to the physical address of the shutdown function,
- Disable the MMU,
- Reset the vectors,
- Return to EL1

This patch does not handle hosts using PSCI yet: they will need to issue
a final SMC on secondary CPUs in order to park themselves at EL3, since
the hypervisor will no longer exist to emulate the wakeup call.

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/control.c | 25 ++++++++---
hypervisor/arch/arm/include/asm/control.h | 2 +
hypervisor/arch/arm/include/asm/percpu.h | 1 +
hypervisor/arch/arm/mmu_hyp.c | 67 +++++++++++++++++++++++++++++
hypervisor/arch/arm/setup.c | 63 +++++++++++++++++++++++----
5 files changed, 144 insertions(+), 14 deletions(-)

diff --git a/hypervisor/arch/arm/control.c b/hypervisor/arch/arm/control.c
index 8d123f0..58f6bed 100644
--- a/hypervisor/arch/arm/control.c
+++ b/hypervisor/arch/arm/control.c
@@ -90,12 +90,14 @@ static void arch_reset_el1(struct registers *regs)

void arch_reset_self(struct per_cpu *cpu_data)
{
- int err;
+ int err = 0;
unsigned long reset_address;
struct cell *cell = cpu_data->cell;
struct registers *regs = guest_regs(cpu_data);
+ bool is_shutdown = cpu_data->shutdown;

- err = arch_mmu_cpu_cell_init(cpu_data);
+ if (!is_shutdown)
+ err = arch_mmu_cpu_cell_init(cpu_data);
if (err)
printk("MMU setup failed\n");
/*
@@ -112,12 +114,15 @@ void arch_reset_self(struct per_cpu *cpu_data)
*/
irqchip_eoi_irq(SGI_CPU_OFF, true);

- err = irqchip_cpu_reset(cpu_data);
- if (err)
- printk("IRQ setup failed\n");
+ /* irqchip_cpu_shutdown already resets the GIC on all CPUs. */
+ if (!is_shutdown) {
+ err = irqchip_cpu_reset(cpu_data);
+ if (err)
+ printk("IRQ setup failed\n");
+ }

/* Wait for the driver to call cpu_up */
- if (cell == &root_cell)
+ if (cell == &root_cell || is_shutdown)
reset_address = arch_smp_spin(cpu_data, root_cell.arch.smp);
else
reset_address = arch_smp_spin(cpu_data, cell->arch.smp);
@@ -131,6 +136,10 @@ void arch_reset_self(struct per_cpu *cpu_data)
arm_write_banked_reg(ELR_hyp, reset_address);
arm_write_banked_reg(SPSR_hyp, RESET_PSR);

+ if (is_shutdown)
+ /* Won't return here. */
+ arch_shutdown_self(cpu_data);
+
vmreturn(regs);
}

@@ -197,6 +206,10 @@ struct registers* arch_handle_exit(struct per_cpu *cpu_data,
panic_stop(cpu_data);
}

+ if (cpu_data->shutdown)
+ /* Won't return here. */
+ arch_shutdown_self(cpu_data);
+
return regs;
}

diff --git a/hypervisor/arch/arm/include/asm/control.h b/hypervisor/arch/arm/include/asm/control.h
index f1842ff..7bff77f 100644
--- a/hypervisor/arch/arm/include/asm/control.h
+++ b/hypervisor/arch/arm/include/asm/control.h
@@ -36,8 +36,10 @@ void arch_handle_trap(struct per_cpu *cpu_data, struct registers *guest_regs);
struct registers* arch_handle_exit(struct per_cpu *cpu_data,
struct registers *regs);
void arch_reset_self(struct per_cpu *cpu_data);
+void arch_shutdown_self(struct per_cpu *cpu_data);

void __attribute__((noreturn)) vmreturn(struct registers *guest_regs);
+void __attribute__((noreturn)) arch_shutdown_mmu(struct per_cpu *cpu_data);

#endif /* !__ASSEMBLY__ */

diff --git a/hypervisor/arch/arm/include/asm/percpu.h b/hypervisor/arch/arm/include/asm/percpu.h
index 69873b5..e71d647 100644
--- a/hypervisor/arch/arm/include/asm/percpu.h
+++ b/hypervisor/arch/arm/include/asm/percpu.h
@@ -63,6 +63,7 @@ struct per_cpu {
bool cpu_stopped;
bool cell_pages_dirty;
int shutdown_state;
+ bool shutdown;
bool failed;
} __attribute__((aligned(PAGE_SIZE)));

diff --git a/hypervisor/arch/arm/mmu_hyp.c b/hypervisor/arch/arm/mmu_hyp.c
index 38eacbd..509d76b 100644
--- a/hypervisor/arch/arm/mmu_hyp.c
+++ b/hypervisor/arch/arm/mmu_hyp.c
@@ -154,6 +154,33 @@ setup_mmu_el2(struct per_cpu *cpu_data, phys2virt_t phys2virt, u64 ttbr)
asm volatile("b .\n");
}

+/*
+ * Shut down the MMU and return to EL1 with the kernel context stored in `regs'
+ */
+static void __attribute__((naked)) __attribute__((section(".trampoline")))
+shutdown_el2(struct registers *regs, unsigned long vectors)
+{
+ u32 sctlr_el2;
+
+ /* Disable stage-1 translation, caches must be cleaned. */
+ arm_read_sysreg(SCTLR_EL2, sctlr_el2);
+ sctlr_el2 &= ~(SCTLR_M_BIT | SCTLR_C_BIT | SCTLR_I_BIT);
+ arm_write_sysreg(SCTLR_EL2, sctlr_el2);
+ isb();
+
+ /* Clean the MMU registers */
+ arm_write_sysreg(HMAIR0, 0);
+ arm_write_sysreg(HMAIR1, 0);
+ arm_write_sysreg(TTBR0_EL2, 0);
+ arm_write_sysreg(TCR_EL2, 0);
+ isb();
+
+ /* Reset the vectors as late as possible */
+ arm_write_sysreg(HVBAR, vectors);
+
+ vmreturn(regs);
+}
+
static void check_mmu_map(unsigned long virt_addr, unsigned long phys_addr)
{
unsigned long phys_base;
@@ -251,6 +278,46 @@ int switch_exception_level(struct per_cpu *cpu_data)
return 0;
}

+void __attribute__((noreturn)) arch_shutdown_mmu(struct per_cpu *cpu_data)
+{
+ static DEFINE_SPINLOCK(map_lock);
+
+ virt2phys_t virt2phys = page_map_hvirt2phys;
+ void *stack_virt = cpu_data->stack;
+ unsigned long stack_phys = virt2phys((void *)stack_virt);
+ unsigned long trampoline_phys = virt2phys((void *)&trampoline_start);
+ struct registers *regs_phys =
+ (struct registers *)virt2phys(guest_regs(cpu_data));
+
+ /* Jump to the identity-mapped trampoline page before shutting down */
+ void (*shutdown_fun_phys)(struct registers*, unsigned long);
+ shutdown_fun_phys = (void*)virt2phys(shutdown_el2);
+
+ /*
+ * No need to check for size or overlapping here, it has already been
+ * done, and the paging structures will soon be deleted. However, the
+ * cells' CPUs may execute this concurrently.
+ */
+ spin_lock(&map_lock);
+ page_map_create(&hv_paging_structs, stack_phys, PAGE_SIZE, stack_phys,
+ PAGE_DEFAULT_FLAGS, PAGE_MAP_NON_COHERENT);
+ page_map_create(&hv_paging_structs, trampoline_phys, PAGE_SIZE,
+ trampoline_phys, PAGE_DEFAULT_FLAGS, PAGE_MAP_NON_COHERENT);
+ spin_unlock(&map_lock);
+
+ arch_cpu_dcaches_flush(CACHES_CLEAN);
+
+ /*
+ * Final shutdown:
+ * - disable the MMU whilst inside the trampoline page
+ * - reset the vectors
+ * - return to EL1
+ */
+ shutdown_fun_phys(regs_phys, saved_vectors);
+
+ __builtin_unreachable();
+}
+
int arch_map_device(void *paddr, void *vaddr, unsigned long size)
{
return page_map_create(&hv_paging_structs, (unsigned long)paddr, size,
diff --git a/hypervisor/arch/arm/setup.c b/hypervisor/arch/arm/setup.c
index 1998e12..d4785b8 100644
--- a/hypervisor/arch/arm/setup.c
+++ b/hypervisor/arch/arm/setup.c
@@ -118,14 +118,61 @@ void arch_cpu_activate_vmm(struct per_cpu *cpu_data)
while (1);
}

-void arch_cpu_restore(struct per_cpu *cpu_data)
+void arch_shutdown_self(struct per_cpu *cpu_data)
{
+ irqchip_cpu_shutdown(cpu_data);
+
+ /* Free the guest */
+ arm_write_sysreg(HCR, 0);
+ arm_write_sysreg(TPIDR_EL2, 0);
+ arm_write_sysreg(VTCR_EL2, 0);
+
+ /* Remove stage-2 mappings */
+ arch_cpu_tlb_flush(cpu_data);
+
+ /* TLB flush needs the cell's VMID */
+ isb();
+ arm_write_sysreg(VTTBR_EL2, 0);
+
+ /* Return to EL1 */
+ arch_shutdown_mmu(cpu_data);
}

-// catch missing symbols
-#include <jailhouse/printk.h>
-#include <jailhouse/processor.h>
-#include <jailhouse/control.h>
-#include <jailhouse/string.h>
-void arch_shutdown_cpu(unsigned int cpu_id) {}
-void arch_shutdown(void) {}
+/*
+ * This handler is only used for cells, not for the root. The core already
+ * issued a cpu_suspend. arch_reset_cpu will cause arch_reset_self to be
+ * called on that CPU, which will in turn call arch_shutdown_self.
+ */
+void arch_shutdown_cpu(unsigned int cpu_id)
+{
+ struct per_cpu *cpu_data = per_cpu(cpu_id);
+
+ cpu_data->virt_id = cpu_id;
+ cpu_data->shutdown = true;
+
+ if (psci_wait_cpu_stopped(cpu_id))
+ printk("FATAL: unable to stop CPU%d\n", cpu_id);
+
+ arch_reset_cpu(cpu_id);
+}
+
+void arch_shutdown(void)
+{
+ unsigned int cpu;
+ struct cell *cell = root_cell.next;
+
+ /* Re-route each SPI to CPU0 */
+ for (; cell != NULL; cell = cell->next)
+ irqchip_cell_exit(cell);
+
+ /*
+ * Let the exit handler call reset_self to let the core finish its
+ * shutdown function and release its lock.
+ */
+ for_each_cpu(cpu, root_cell.cpu_set)
+ per_cpu(cpu)->shutdown = true;
+}
+
+void arch_cpu_restore(struct per_cpu *cpu_data)
+{
+}
--
1.7.9.5
Jean-Philippe Brucker
2014-08-08 12:03:12 UTC
Permalink
Give the CPUs back to Linux when a cell is destroyed, after resetting
their whole context.

This patch uses the old vexpress CPU hotplug system in Linux: the
secondary startup function address is kept in the system flags, so the
cpu_reset function will simply jump there after resetting the CPU state.
A future patch will add PSCI hotplug support, and once the hypervisor
has access to a device tree, this spin function will need to be set
dynamically in a `struct hotplug_ops'.

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/Makefile | 2 +-
hypervisor/arch/arm/control.c | 19 ++++++++-
hypervisor/arch/arm/include/asm/control.h | 3 ++
hypervisor/arch/arm/include/asm/platform.h | 7 ++++
hypervisor/arch/arm/mmu_cell.c | 5 +++
hypervisor/arch/arm/setup.c | 5 ++-
hypervisor/arch/arm/spin.c | 59 ++++++++++++++++++++++++++++
7 files changed, 97 insertions(+), 3 deletions(-)
create mode 100644 hypervisor/arch/arm/spin.c

diff --git a/hypervisor/arch/arm/Makefile b/hypervisor/arch/arm/Makefile
index 6ad6b47..6445d15 100644
--- a/hypervisor/arch/arm/Makefile
+++ b/hypervisor/arch/arm/Makefile
@@ -16,7 +16,7 @@ always := built-in.o

obj-y := entry.o dbg-write.o exception.o setup.o control.o lib.o traps.o
obj-y += paging.o mmu_hyp.o mmu_cell.o
-obj-y += psci.o psci_low.o
+obj-y += psci.o psci_low.o spin.o
obj-y += irqchip.o
obj-$(CONFIG_ARM_GIC_V3) += gic-v3.o
obj-$(CONFIG_ARCH_VEXPRESS) += dbg-write-pl011.o
diff --git a/hypervisor/arch/arm/control.c b/hypervisor/arch/arm/control.c
index 67ce85f..9df8f04 100644
--- a/hypervisor/arch/arm/control.c
+++ b/hypervisor/arch/arm/control.c
@@ -20,6 +20,7 @@
static void arch_reset_self(struct per_cpu *cpu_data)
{
int err;
+ unsigned long reset_address;
struct registers *regs = guest_regs(cpu_data);

err = arch_mmu_cpu_cell_init(cpu_data);
@@ -36,7 +37,13 @@ static void arch_reset_self(struct per_cpu *cpu_data)
if (err)
printk("IRQ setup failed\n");

- arm_write_banked_reg(ELR_hyp, 0);
+ if (cpu_data->cell == &root_cell)
+ /* Wait for the driver to call cpu_up */
+ reset_address = arch_cpu_spin();
+ else
+ reset_address = 0;
+
+ arm_write_banked_reg(ELR_hyp, reset_address);
arm_write_banked_reg(SPSR_hyp, RESET_PSR);
memset(regs, 0, sizeof(struct registers));

@@ -140,3 +147,13 @@ int arch_cell_create(struct per_cpu *cpu_data, struct cell *cell)

return 0;
}
+
+void arch_cell_destroy(struct per_cpu *cpu_data, struct cell *cell)
+{
+ unsigned int cpu;
+
+ arch_mmu_cell_destroy(cell);
+
+ for_each_cpu(cpu, cell->cpu_set)
+ arch_reset_cpu(cpu);
+}
diff --git a/hypervisor/arch/arm/include/asm/control.h b/hypervisor/arch/arm/include/asm/control.h
index a58ba90..78ecbd6 100644
--- a/hypervisor/arch/arm/include/asm/control.h
+++ b/hypervisor/arch/arm/include/asm/control.h
@@ -22,9 +22,12 @@
#ifndef __ASSEMBLY__

int arch_mmu_cell_init(struct cell *cell);
+void arch_mmu_cell_destroy(struct cell *cell);
int arch_mmu_cpu_cell_init(struct per_cpu *cpu_data);
void arch_handle_sgi(struct per_cpu *cpu_data, u32 irqn);
void arch_handle_trap(struct per_cpu *cpu_data, struct registers *guest_regs);
+int arch_spin_init(void);
+unsigned long arch_cpu_spin(void);
struct registers* arch_handle_exit(struct per_cpu *cpu_data,
struct registers *regs);

diff --git a/hypervisor/arch/arm/include/asm/platform.h b/hypervisor/arch/arm/include/asm/platform.h
index 8316689..d748ba3 100644
--- a/hypervisor/arch/arm/include/asm/platform.h
+++ b/hypervisor/arch/arm/include/asm/platform.h
@@ -37,7 +37,14 @@
# endif /* GIC */

# define MAINTENANCE_IRQ 25
+# define HOTPLUG_MBOX ((void *)0x1c010030)

#endif /* CONFIG_ARCH_VEXPRESS */
+
+#define HOTPLUG_SPIN 1
+/*
+#define HOTPLUG_PSCI 1
+*/
+
#endif /* !__ASSEMBLY__ */
#endif /* !_JAILHOUSE_ASM_PLATFORM_H */
diff --git a/hypervisor/arch/arm/mmu_cell.c b/hypervisor/arch/arm/mmu_cell.c
index fcd977a..968ca3a 100644
--- a/hypervisor/arch/arm/mmu_cell.c
+++ b/hypervisor/arch/arm/mmu_cell.c
@@ -68,6 +68,11 @@ int arch_mmu_cell_init(struct cell *cell)
return 0;
}

+void arch_mmu_cell_destroy(struct cell *cell)
+{
+ page_free(&mem_pool, cell->arch.mm.root_table, 1);
+}
+
int arch_mmu_cpu_cell_init(struct per_cpu *cpu_data)
{
struct cell *cell = cpu_data->cell;
diff --git a/hypervisor/arch/arm/setup.c b/hypervisor/arch/arm/setup.c
index e0ff667..ebd1716 100644
--- a/hypervisor/arch/arm/setup.c
+++ b/hypervisor/arch/arm/setup.c
@@ -76,6 +76,10 @@ int arch_cpu_init(struct per_cpu *cpu_data)
/* Setup guest traps */
arm_write_sysreg(HCR, hcr);

+ err = arch_spin_init();
+ if (err)
+ return err;
+
err = arch_mmu_cpu_cell_init(cpu_data);
if (err)
return err;
@@ -112,7 +116,6 @@ void arch_cpu_restore(struct per_cpu *cpu_data)
#include <jailhouse/control.h>
#include <jailhouse/string.h>
void arch_shutdown_cpu(unsigned int cpu_id) {}
-void arch_cell_destroy(struct per_cpu *cpu_data, struct cell *new_cell) {}
void arch_config_commit(struct per_cpu *cpu_data,
struct cell *cell_added_removed) {}
void arch_shutdown(void) {}
diff --git a/hypervisor/arch/arm/spin.c b/hypervisor/arch/arm/spin.c
new file mode 100644
index 0000000..07ba22d
--- /dev/null
+++ b/hypervisor/arch/arm/spin.c
@@ -0,0 +1,59 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ * Authors:
+ * Jean-Philippe Brucker <jean-***@arm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <asm/io.h>
+#include <asm/platform.h>
+#include <asm/setup.h>
+#include <asm/control.h>
+#include <jailhouse/printk.h>
+
+#if HOTPLUG_SPIN == 1
+int arch_spin_init(void)
+{
+ unsigned long mbox = (unsigned long)HOTPLUG_MBOX;
+ void *mbox_page = (void *)(mbox & PAGE_MASK);
+ int err = arch_map_device(mbox_page, mbox_page, PAGE_SIZE);
+
+ if (err)
+ printk("Unable to map spin mbox page\n");
+
+ return err;
+}
+
+
+unsigned long arch_cpu_spin(void)
+{
+ u32 address;
+
+ /*
+ * This is super-dodgy: we assume nothing wrote to the flag register
+ * since the kernel called smp_prepare_cpus, at initialisation.
+ */
+ do {
+ wfe();
+ address = readl_relaxed((void *)HOTPLUG_MBOX);
+ cpu_relax();
+ } while (address == 0);
+
+ return address;
+}
+
+#elif HOTPLUG_PSCI == 1
+int arch_spin_init(void)
+{
+ return 0;
+}
+
+unsigned long arch_cpu_spin(void)
+{
+ /* FIXME: wait for a PSCI hvc */
+ return 0;
+}
+#endif
--
1.7.9.5
Jean-Philippe Brucker
2014-08-08 12:03:10 UTC
Permalink
This patch implements most CPU handling functions needed to set up and
a new cell.
- When the core calls suspend_cpu, an SGI is sent to this CPU.
Since all IRQs are taken directly to the hypervisor, the guest will be
interrupted and execution will continue into psci_suspend.
- resume_cpu will simply call psci_cpu_on and return to the IRQ
handling loop.
- reset_cpu will resume the CPU into arch_reset_self.
After resetting all relevant registers and devices, execution will
continue into the newly created guest, by calling vmresume with a
clean set of registers.

To keep this patch light, most of the reset code is still missing.

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/control.c | 106 +++++++++++++++++++++++++--
hypervisor/arch/arm/exception.S | 9 ++-
hypervisor/arch/arm/include/asm/control.h | 6 +-
hypervisor/arch/arm/include/asm/percpu.h | 14 ++--
hypervisor/arch/arm/include/asm/processor.h | 3 +
hypervisor/arch/arm/setup.c | 6 --
6 files changed, 123 insertions(+), 21 deletions(-)

diff --git a/hypervisor/arch/arm/control.c b/hypervisor/arch/arm/control.c
index 6cdb133..af41050 100644
--- a/hypervisor/arch/arm/control.c
+++ b/hypervisor/arch/arm/control.c
@@ -12,18 +12,35 @@

#include <asm/control.h>
#include <asm/irqchip.h>
+#include <asm/traps.h>
+#include <jailhouse/control.h>
#include <jailhouse/printk.h>
+#include <jailhouse/string.h>

-void arch_handle_sgi(struct per_cpu *cpu_data, u32 irqn)
+static void arch_reset_self(struct per_cpu *cpu_data)
{
- switch (irqn) {
- case SGI_INJECT:
- irqchip_inject_pending(cpu_data);
- break;
- }
+ int err;
+ struct registers *regs = guest_regs(cpu_data);
+
+ err = arch_mmu_cpu_cell_init(cpu_data);
+ if (err)
+ printk("MMU setup failed\n");
+
+ arm_write_banked_reg(ELR_hyp, 0);
+ arm_write_banked_reg(SPSR_hyp, RESET_PSR);
+ memset(regs, 0, sizeof(struct registers));
+
+ /* Restore an empty context */
+ vmreturn(regs);
+}
+
+static void arch_suspend_self(struct per_cpu *cpu_data)
+{
+ psci_suspend(cpu_data);
}

-void arch_handle_exit(struct per_cpu *cpu_data, struct registers *regs)
+struct registers* arch_handle_exit(struct per_cpu *cpu_data,
+ struct registers *regs)
{
switch (regs->exit_reason) {
case EXIT_REASON_IRQ:
@@ -37,4 +54,79 @@ void arch_handle_exit(struct per_cpu *cpu_data, struct registers *regs)
regs->exit_reason);
while(1);
}
+
+ return regs;
+}
+
+/* CPU must be stopped */
+void arch_resume_cpu(unsigned int cpu_id)
+{
+ /*
+ * Simply get out of the spin loop by returning to handle_sgi
+ * If the CPU is being reset, it already has left the PSCI idle loop.
+ */
+ if (psci_cpu_stopped(cpu_id))
+ psci_resume(cpu_id);
+}
+
+/* CPU must be stopped */
+void arch_park_cpu(unsigned int cpu_id)
+{
+ /*
+ * Reset always follows park_cpu, so we just need to make sure that the
+ * CPU is suspended
+ */
+ if (psci_wait_cpu_stopped(cpu_id) != 0)
+ printk("ERROR: CPU%d is supposed to be stopped\n", cpu_id);
+}
+
+/* CPU must be stopped */
+void arch_reset_cpu(unsigned int cpu_id)
+{
+ unsigned long cpu_data = (unsigned long)per_cpu(cpu_id);
+
+ if (psci_cpu_on(cpu_id, (unsigned long)arch_reset_self, cpu_data))
+ printk("ERROR: unable to reset CPU%d (was running)\n", cpu_id);
+}
+
+void arch_suspend_cpu(unsigned int cpu_id)
+{
+ struct sgi sgi;
+
+ if (psci_cpu_stopped(cpu_id) != 0)
+ return;
+
+ sgi.routing_mode = 0;
+ sgi.aff1 = 0;
+ sgi.aff2 = 0;
+ sgi.aff3 = 0;
+ sgi.targets = 1 << cpu_id;
+ sgi.id = SGI_CPU_OFF;
+
+ irqchip_send_sgi(&sgi);
+}
+
+void arch_handle_sgi(struct per_cpu *cpu_data, u32 irqn)
+{
+ switch (irqn) {
+ case SGI_INJECT:
+ irqchip_inject_pending(cpu_data);
+ break;
+ case SGI_CPU_OFF:
+ arch_suspend_self(cpu_data);
+ break;
+ default:
+ printk("WARN: unknown SGI received %d\n", irqn);
+ }
+}
+
+int arch_cell_create(struct per_cpu *cpu_data, struct cell *cell)
+{
+ int err;
+
+ err = arch_mmu_cell_init(cell);
+ if (err)
+ return err;
+
+ return 0;
}
diff --git a/hypervisor/arch/arm/exception.S b/hypervisor/arch/arm/exception.S
index 230da47..6190098 100644
--- a/hypervisor/arch/arm/exception.S
+++ b/hypervisor/arch/arm/exception.S
@@ -46,7 +46,14 @@ vmexit_common:
mov r1, sp
bl arch_handle_exit

- add sp, sp, #4
+ /*
+ * Because the hypervisor may call vmreturn to reset the stack,
+ * arch_handle_exit has to return with the guest registers in r0
+ */
+.globl vmreturn
+vmreturn:
+ mov sp, r0
+ add sp, #4

/* Restore usr regs */
pop {r0-r12, lr}
diff --git a/hypervisor/arch/arm/include/asm/control.h b/hypervisor/arch/arm/include/asm/control.h
index ed571a2..a58ba90 100644
--- a/hypervisor/arch/arm/include/asm/control.h
+++ b/hypervisor/arch/arm/include/asm/control.h
@@ -17,6 +17,7 @@
#include <asm/percpu.h>

#define SGI_INJECT 0
+#define SGI_CPU_OFF 1

#ifndef __ASSEMBLY__

@@ -24,7 +25,10 @@ int arch_mmu_cell_init(struct cell *cell);
int arch_mmu_cpu_cell_init(struct per_cpu *cpu_data);
void arch_handle_sgi(struct per_cpu *cpu_data, u32 irqn);
void arch_handle_trap(struct per_cpu *cpu_data, struct registers *guest_regs);
-void arch_handle_exit(struct per_cpu *cpu_data, struct registers *guest_regs);
+struct registers* arch_handle_exit(struct per_cpu *cpu_data,
+ struct registers *regs);
+
+void __attribute__((noreturn)) vmreturn(struct registers *guest_regs);

#endif /* !__ASSEMBLY__ */

diff --git a/hypervisor/arch/arm/include/asm/percpu.h b/hypervisor/arch/arm/include/asm/percpu.h
index 53bf97f..e1c198c 100644
--- a/hypervisor/arch/arm/include/asm/percpu.h
+++ b/hypervisor/arch/arm/include/asm/percpu.h
@@ -57,13 +57,8 @@ struct per_cpu {
/* The mbox will be accessed with a ldrd, which requires alignment */
__attribute__((aligned(8))) struct psci_mbox psci_mbox;

- volatile bool stop_cpu;
- volatile bool wait_for_sipi;
- volatile bool cpu_stopped;
- bool init_signaled;
- int sipi_vector;
+ bool cpu_stopped;
bool flush_caches;
- bool shutdown_cpu;
int shutdown_state;
bool failed;
} __attribute__((aligned(PAGE_SIZE)));
@@ -75,6 +70,13 @@ static inline struct per_cpu *per_cpu(unsigned int cpu)
return (struct per_cpu *)(__page_pool + (cpu << PERCPU_SIZE_SHIFT));
}

+static inline struct registers *guest_regs(struct per_cpu *cpu_data)
+{
+ /* Assumes that the trap handler is entered with an empty stack */
+ return (struct registers *)(cpu_data->stack + PERCPU_STACK_END
+ - sizeof(struct registers));
+}
+
/* Validate defines */
#define CHECK_ASSUMPTION(assume) ((void)sizeof(char[1 - 2*!(assume)]))

diff --git a/hypervisor/arch/arm/include/asm/processor.h b/hypervisor/arch/arm/include/asm/processor.h
index 599f4f6..00ffcf0 100644
--- a/hypervisor/arch/arm/include/asm/processor.h
+++ b/hypervisor/arch/arm/include/asm/processor.h
@@ -34,6 +34,9 @@
#define PSR_IT_MASK(it) (((it) & 0x3) << 25 | ((it) & 0xfc) << 8)
#define PSR_IT(psr) (((psr) >> 25 & 0x3) | ((psr) >> 8 & 0xfc))

+#define RESET_PSR (PSR_I_BIT | PSR_F_BIT | PSR_A_BIT | PSR_SVC_MODE \
+ | PSR_32_BIT)
+
#define MPIDR_CPUID_MASK 0x00ffffff

#define PFR1_VIRT(pfr) ((pfr) >> 12 & 0xf)
diff --git a/hypervisor/arch/arm/setup.c b/hypervisor/arch/arm/setup.c
index e7a0845..e0ff667 100644
--- a/hypervisor/arch/arm/setup.c
+++ b/hypervisor/arch/arm/setup.c
@@ -111,13 +111,7 @@ void arch_cpu_restore(struct per_cpu *cpu_data)
#include <jailhouse/processor.h>
#include <jailhouse/control.h>
#include <jailhouse/string.h>
-void arch_suspend_cpu(unsigned int cpu_id) {}
-void arch_resume_cpu(unsigned int cpu_id) {}
-void arch_reset_cpu(unsigned int cpu_id) {}
-void arch_park_cpu(unsigned int cpu_id) {}
void arch_shutdown_cpu(unsigned int cpu_id) {}
-int arch_cell_create(struct per_cpu *cpu_data, struct cell *new_cell)
-{ return -ENOSYS; }
void arch_cell_destroy(struct per_cpu *cpu_data, struct cell *new_cell) {}
void arch_config_commit(struct per_cpu *cpu_data,
struct cell *cell_added_removed) {}
--
1.7.9.5
--
You received this message because you are subscribed to the Google Groups "Jailhouse" group.
To unsubscribe from this group and stop receiving emails from it, send an email to jailhouse-dev+***@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
Jean-Philippe Brucker
2014-08-08 12:03:25 UTC
The Auxiliary Control Register (ACTLR) may be used on some platforms to
disable memory coherency between the cores, for instance when unplugging
a CPU. This patch ensures that the guest can never modify ACTLR, by
trapping accesses to it with the HCR.TAC bit.

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/control.c | 1 -
hypervisor/arch/arm/setup.c | 2 +-
hypervisor/arch/arm/traps.c | 25 +++++++++++++++++++++++++
3 files changed, 26 insertions(+), 2 deletions(-)

diff --git a/hypervisor/arch/arm/control.c b/hypervisor/arch/arm/control.c
index 13d48ce..8d123f0 100644
--- a/hypervisor/arch/arm/control.c
+++ b/hypervisor/arch/arm/control.c
@@ -56,7 +56,6 @@ static void arch_reset_el1(struct registers *regs)
arm_read_sysreg(SCTLR_EL1, sctlr);
sctlr = sctlr & ~SCTLR_MASK;
arm_write_sysreg(SCTLR_EL1, sctlr);
- arm_write_sysreg(ACTLR_EL1, 0);
arm_write_sysreg(CPACR_EL1, 0);
arm_write_sysreg(CONTEXTIDR_EL1, 0);
arm_write_sysreg(PAR_EL1, 0);
diff --git a/hypervisor/arch/arm/setup.c b/hypervisor/arch/arm/setup.c
index 8a757c0..1998e12 100644
--- a/hypervisor/arch/arm/setup.c
+++ b/hypervisor/arch/arm/setup.c
@@ -60,7 +60,7 @@ int arch_cpu_init(struct per_cpu *cpu_data)
{
int err = 0;
unsigned long hcr = HCR_VM_BIT | HCR_IMO_BIT | HCR_FMO_BIT
- | HCR_TSC_BIT;
+ | HCR_TSC_BIT | HCR_TAC_BIT;

cpu_data->psci_mbox.entry = 0;
cpu_data->virt_id = cpu_data->cpu_id;
diff --git a/hypervisor/arch/arm/traps.c b/hypervisor/arch/arm/traps.c
index d18794e..8caca1f 100644
--- a/hypervisor/arch/arm/traps.c
+++ b/hypervisor/arch/arm/traps.c
@@ -231,6 +231,30 @@ static int arch_handle_hvc(struct per_cpu *cpu_data, struct trap_context *ctx)
return TRAP_HANDLED;
}

+static int arch_handle_cp15_32(struct per_cpu *cpu_data, struct trap_context *ctx)
+{
+ u32 opc2 = ctx->esr >> 17 & 0x7;
+ u32 opc1 = ctx->esr >> 14 & 0x7;
+ u32 crn = ctx->esr >> 10 & 0xf;
+ u32 rt = ctx->esr >> 5 & 0xf;
+ u32 crm = ctx->esr >> 1 & 0xf;
+ u32 read = ctx->esr & 1;
+
+ if (opc1 == 0 && crn == 1 && crm == 0 && opc2 == 1) {
+ /* Do not let the guest disable coherency by writing ACTLR... */
+ if (read) {
+ unsigned long val;
+ arm_read_sysreg(ACTLR_EL1, val);
+ access_cell_reg(ctx, rt, &val, false);
+ }
+ arch_skip_instruction(ctx);
+
+ return TRAP_HANDLED;
+ }
+
+ return TRAP_UNHANDLED;
+}
+
static int arch_handle_cp15_64(struct per_cpu *cpu_data, struct trap_context *ctx)
{
unsigned long rt_val, rt2_val;
@@ -263,6 +287,7 @@ static int arch_handle_cp15_64(struct per_cpu *cpu_data, struct trap_context *ct

static const trap_handler trap_handlers[38] =
{
+ [ESR_EC_CP15_32] = arch_handle_cp15_32,
[ESR_EC_CP15_64] = arch_handle_cp15_64,
[ESR_EC_HVC] = arch_handle_hvc,
[ESR_EC_SMC] = arch_handle_smc,
--
1.7.9.5
Jean-Philippe Brucker
2014-08-08 12:03:32 UTC
GICv2 is limited to 8 CPUs and uses independent routing bits, whereas
GICv3 (with ARE enabled) uses the MPIDR encoding (aff3.aff2.aff1.aff0)
for routing SPIs.
Before handling SPIs, the GICv2 backend has to probe its banked view of
the distributor to know which CPU interface it is accessing. After that,
the implementation is roughly the same as for GICv3, but the
GICD_ITARGETSR registers are used instead of GICD_IROUTER.
Because the guest isn't supposed to rely on the CPU interface number
being consistent with the CPU logical ID, we don't have to translate it
to a virtual ID before handling route accesses inside SMP cells.

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/gic-common.c | 148 ++++++++++++++++++++++++++
hypervisor/arch/arm/gic-v2.c | 13 ++-
hypervisor/arch/arm/include/asm/gic_common.h | 3 +
3 files changed, 157 insertions(+), 7 deletions(-)

diff --git a/hypervisor/arch/arm/gic-common.c b/hypervisor/arch/arm/gic-common.c
index 2cf5b11..673b932 100644
--- a/hypervisor/arch/arm/gic-common.c
+++ b/hypervisor/arch/arm/gic-common.c
@@ -29,6 +29,9 @@ extern unsigned int gicd_size;

static DEFINE_SPINLOCK(dist_lock);

+/* The GIC interface numbering does not necessarily match the logical map */
+u8 target_cpu_map[8] = { 0, 0, 0, 0, 0, 0, 0, 0 };
+
/*
* Most of the GIC distributor writes only reconfigure the IRQs corresponding to
* the bits of the written value, by using separate `set' and `clear' registers.
@@ -144,6 +147,81 @@ static int handle_irq_route(struct per_cpu *cpu_data,
}
}

+/*
+ * GICv2 uses an 8-bit target field for each IRQ in the ITARGETSR registers
+ */
+static int handle_irq_target(struct per_cpu *cpu_data,
+ struct mmio_access *access,
+ unsigned int reg)
+{
+ /*
+ * ITARGETSR contain one byte per IRQ, so the first one affected by this
+ * access corresponds to the reg index
+ */
+ unsigned int i, cpu;
+ unsigned int spi = reg - 32;
+ unsigned int offset;
+ u32 access_mask = 0;
+ u8 targets;
+
+ /*
+ * Let the guest freely access its SGIs and PPIs, which may be used to
+ * fill its CPU interface map.
+ */
+ if (!is_spi(reg))
+ return TRAP_UNHANDLED;
+
+ /*
+ * The registers are byte-accessible, extend the access to a word if
+ * necessary.
+ */
+ offset = spi % 4;
+ access->val <<= 8 * offset;
+ access->size = 4;
+ spi -= offset;
+
+ for (i = 0; i < 4; i++, spi++) {
+ if (spi_in_cell(cpu_data->cell, spi))
+ access_mask |= 0xff << (8 * i);
+ else
+ continue;
+
+ if (!access->is_write)
+ continue;
+
+ targets = (access->val >> (8 * i)) & 0xff;
+
+ /* Check that the targeted interface belongs to the cell */
+ for (cpu = 0; cpu < 8; cpu++) {
+ if (!(targets & target_cpu_map[cpu]))
+ continue;
+
+ if (per_cpu(cpu)->cell == cpu_data->cell)
+ continue;
+
+ printk("Attempt to route SPI%d outside of cell\n", spi);
+ return TRAP_FORBIDDEN;
+ }
+ }
+
+ if (access->is_write) {
+ spin_lock(&dist_lock);
+ u32 itargetsr = readl_relaxed(gicd_base + GICD_ITARGETSR + reg
+ + offset);
+ access->val &= access_mask;
+ /* Combine with external SPIs */
+ access->val |= (itargetsr & ~access_mask);
+ /* And do the access */
+ arch_mmio_access(access);
+ spin_unlock(&dist_lock);
+ } else {
+ arch_mmio_access(access);
+ access->val &= access_mask;
+ }
+
+ return TRAP_HANDLED;
+}
+
static int handle_sgir_access(struct per_cpu *cpu_data,
struct mmio_access *access)
{
@@ -163,6 +241,28 @@ static int handle_sgir_access(struct per_cpu *cpu_data,
return gic_handle_sgir_write(cpu_data, &sgi, false);
}

+/*
+ * Get the CPU interface ID for this CPU. It can be discovered by reading
+ * the banked value of the SGI and PPI TARGETS registers.
+ * Linux commit 2bb3135 explains why the probe may need to scan the first 8
+ * registers: some early implementations returned 0 for the first ITARGETSR
+ * registers of the distributor.
+ * Since those didn't have virtualization extensions, we can safely ignore that
+ * case.
+ */
+int gic_probe_cpu_id(unsigned int cpu)
+{
+ if (cpu >= 8)
+ return -EINVAL;
+
+ target_cpu_map[cpu] = readl_relaxed(gicd_base + GICD_ITARGETSR);
+
+ if (target_cpu_map[cpu] == 0)
+ return -ENODEV;
+
+ return 0;
+}
+
int gic_handle_sgir_write(struct per_cpu *cpu_data, struct sgi *sgi,
bool virt_input)
{
@@ -177,8 +277,15 @@ int gic_handle_sgir_write(struct per_cpu *cpu_data, struct sgi *sgi,

/* Filter the targets */
for_each_cpu_except(cpu, cell->cpu_set, this_cpu) {
+ /*
+ * When using a cpu map to target the different CPUs (GICv2),
+ * they are independent from the physical CPU IDs, so there is
+ * no need to translate them to the hypervisor's virtual IDs.
+ */
if (virt_input)
is_target = !!test_bit(cpu_phys2virt(cpu), &targets);
+ else
+ is_target = !!(targets & target_cpu_map[cpu]);

if (sgi->routing_mode == 0 && !is_target)
continue;
@@ -206,6 +313,10 @@ int gic_handle_dist_access(struct per_cpu *cpu_data,
(reg - GICD_IROUTER) / 8);
break;

+ case REG_RANGE(GICD_ITARGETSR, 1024, 1):
+ ret = handle_irq_target(cpu_data, access, reg - GICD_ITARGETSR);
+ break;
+
case REG_RANGE(GICD_ICENABLER, 32, 4):
case REG_RANGE(GICD_ISENABLER, 32, 4):
case REG_RANGE(GICD_ICPENDR, 32, 4):
@@ -288,3 +399,40 @@ void gic_handle_irq(struct per_cpu *cpu_data)
irqchip_eoi_irq(irq_id, handled);
}
}
+
+void gic_target_spis(struct cell *config_cell, struct cell *dest_cell)
+{
+ unsigned int i, first_cpu, cpu_itf;
+ unsigned int shift = 0;
+ void *itargetsr = gicd_base + GICD_ITARGETSR;
+ u32 targets;
+ u32 mask = 0;
+ u32 bits = 0;
+
+ /* Always route to the first logical CPU on reset */
+ for_each_cpu(first_cpu, dest_cell->cpu_set)
+ break;
+
+ cpu_itf = target_cpu_map[first_cpu];
+
+ /* ITARGETSR0-7 contain the PPIs and SGIs, and are read-only. */
+ itargetsr += 4 * 8;
+
+ for (i = 0; i < 64; i++, shift = (shift + 8) % 32) {
+ if (spi_in_cell(config_cell, i)) {
+ mask |= (0xff << shift);
+ bits |= (cpu_itf << shift);
+ }
+
+ /* ITARGETSRs hold 4 IRQs per register */
+ if ((i + 1) % 4 == 0) {
+ targets = readl_relaxed(itargetsr);
+ targets &= ~mask;
+ targets |= bits;
+ writel_relaxed(targets, itargetsr);
+ itargetsr += 4;
+ mask = 0;
+ bits = 0;
+ }
+ }
+}
diff --git a/hypervisor/arch/arm/gic-v2.c b/hypervisor/arch/arm/gic-v2.c
index 71ae1b0..bae281b 100644
--- a/hypervisor/arch/arm/gic-v2.c
+++ b/hypervisor/arch/arm/gic-v2.c
@@ -137,6 +137,9 @@ static int gic_cpu_init(struct per_cpu *cpu_data)
writel_relaxed(vmcr, gich_base + GICH_VMCR);
writel_relaxed(GICH_HCR_EN, gich_base + GICH_HCR);

+ /* Register ourselves into the CPU itf map */
+ gic_probe_cpu_id(cpu_data->cpu_id);
+
return 0;
}

@@ -151,10 +154,6 @@ static void gic_eoi_irq(u32 irq_id, bool deactivate)
writel_relaxed(irq_id, gicc_base + GICC_DIR);
}

-static void gic_route_spis(struct cell *config_cell, struct cell *dest_cell)
-{
-}
-
static void gic_cell_init(struct cell *cell)
{
struct jailhouse_memory gicv_region;
@@ -166,7 +165,7 @@ static void gic_cell_init(struct cell *cell)
* them when unplugging a CPU.
*/
if (cell != &root_cell)
- gic_route_spis(cell, cell);
+ gic_target_spis(cell, cell);

gicv_region.phys_start = (unsigned long)gicv_base;
/*
@@ -189,8 +188,8 @@ static void gic_cell_init(struct cell *cell)

static void gic_cell_exit(struct cell *cell)
{
- /* Reset interrupt routing of the cell's spis*/
- gic_route_spis(cell, &root_cell);
+ /* Reset interrupt routing of the cell's spis */
+ gic_target_spis(cell, &root_cell);
}

static int gic_send_sgi(struct sgi *sgi)
diff --git a/hypervisor/arch/arm/include/asm/gic_common.h b/hypervisor/arch/arm/include/asm/gic_common.h
index 9d87ccb..aa5487c 100644
--- a/hypervisor/arch/arm/include/asm/gic_common.h
+++ b/hypervisor/arch/arm/include/asm/gic_common.h
@@ -42,15 +42,18 @@

#ifndef __ASSEMBLY__

+struct cell;
struct mmio_access;
struct per_cpu;
struct sgi;

+int gic_probe_cpu_id(unsigned int cpu);
int gic_handle_dist_access(struct per_cpu *cpu_data,
struct mmio_access *access);
int gic_handle_sgir_write(struct per_cpu *cpu_data, struct sgi *sgi,
bool virt_input);
void gic_handle_irq(struct per_cpu *cpu_data);
+void gic_target_spis(struct cell *config_cell, struct cell *dest_cell);

#endif /* !__ASSEMBLY__ */
#endif /* !_JAILHOUSE_ASM_GIC_COMMON_H */
--
1.7.9.5
Jean-Philippe Brucker
2014-08-08 12:03:24 UTC
Hotplugging CPUs on ARM is quite difficult, as each platform uses its
own mechanism. The use of PSCI emulation will greatly simplify this, but
on many platforms, we still have to define a set of platform-specific
SMP operations wrapped around the kernel's hotplug implementation.

This patch adds support for the vexpress hotplug system:
- When the root cell attempts to unplug a CPU, to give it to a new
cell, it is put in a WFI loop, which is left when Jailhouse sends
a synchronising IPI to all CPUs that need to be parked.
- When re-assigning a CPU to the root cell, the simplest return path
is through the kernel's secondary entry, whose address is stored in
the system flags register.

Because the kernel only writes the flag register once, plugging CPUs in
the host cannot be accomplished by waiting for a trapped MMIO. Moreover,
such a trap would be missed on hypervisor shutdown, since CPU0 may
return to bare EL1 before secondary CPUs. On some platforms, it may be
necessary to park secondary CPUs outside of the hypervisor on shutdown,
by copying a minimal spin code in a reserved location...

This patch also attempts to combine both classical and PSCI boot methods
in SMP guests: secondary CPUs are held in the psci_emulate_spin handler,
and can be woken up by both a PSCI call and a trapped access to the
vexpress mbox.
The same applies to hotplugging secondary CPUs in the guests, except
that the mailbox method only waits for an IPI.

PSCI in the host is not currently supported: it would require a call to
the actual CPU_OFF handler when shutting down the whole hypervisor.

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/Makefile | 3 +-
hypervisor/arch/arm/control.c | 10 ++--
hypervisor/arch/arm/include/asm/cell.h | 2 +
hypervisor/arch/arm/include/asm/control.h | 2 -
hypervisor/arch/arm/include/asm/platform.h | 2 +-
hypervisor/arch/arm/include/asm/smp.h | 52 +++++++++++++++++
hypervisor/arch/arm/mmio.c | 3 +
hypervisor/arch/arm/setup.c | 9 ++-
hypervisor/arch/arm/smp-vexpress.c | 73 ++++++++++++++++++++++++
hypervisor/arch/arm/smp.c | 84 ++++++++++++++++++++++++++++
hypervisor/arch/arm/spin.c | 59 -------------------
11 files changed, 227 insertions(+), 72 deletions(-)
create mode 100644 hypervisor/arch/arm/include/asm/smp.h
create mode 100644 hypervisor/arch/arm/smp-vexpress.c
create mode 100644 hypervisor/arch/arm/smp.c
delete mode 100644 hypervisor/arch/arm/spin.c

diff --git a/hypervisor/arch/arm/Makefile b/hypervisor/arch/arm/Makefile
index 472e224..7beb612 100644
--- a/hypervisor/arch/arm/Makefile
+++ b/hypervisor/arch/arm/Makefile
@@ -17,10 +17,11 @@ always := built-in.o
obj-y := entry.o dbg-write.o exception.o setup.o control.o lib.o
obj-y += traps.o mmio.o
obj-y += paging.o mmu_hyp.o mmu_cell.o caches.o
-obj-y += psci.o psci_low.o spin.o
+obj-y += psci.o psci_low.o smp.o
obj-y += irqchip.o gic-common.o
obj-$(CONFIG_ARM_GIC_V3) += gic-v3.o
obj-$(CONFIG_ARCH_VEXPRESS) += dbg-write-pl011.o
+obj-$(CONFIG_ARCH_VEXPRESS) += smp-vexpress.o

# Needed for kconfig
ccflags-y += -I$(KERNELDIR)/include
diff --git a/hypervisor/arch/arm/control.c b/hypervisor/arch/arm/control.c
index f223ae8..13d48ce 100644
--- a/hypervisor/arch/arm/control.c
+++ b/hypervisor/arch/arm/control.c
@@ -117,11 +117,11 @@ void arch_reset_self(struct per_cpu *cpu_data)
if (err)
printk("IRQ setup failed\n");

- if (cpu_data->cell == &root_cell)
- /* Wait for the driver to call cpu_up */
- reset_address = arch_cpu_spin();
+ /* Wait for the driver to call cpu_up */
+ if (cell == &root_cell)
+ reset_address = arch_smp_spin(cpu_data, root_cell.arch.smp);
else
- reset_address = 0;
+ reset_address = arch_smp_spin(cpu_data, cell->arch.smp);

/* Set the new MPIDR */
arm_write_sysreg(VMPIDR_EL2, cpu_data->virt_id | MPIDR_MP_BIT);
@@ -290,6 +290,8 @@ int arch_cell_create(struct per_cpu *cpu_data, struct cell *cell)
irqchip_cell_init(cell);
irqchip_root_cell_shrink(cell);

+ register_smp_ops(cell);
+
return 0;
}

diff --git a/hypervisor/arch/arm/include/asm/cell.h b/hypervisor/arch/arm/include/asm/cell.h
index 4dadfaf..1762772 100644
--- a/hypervisor/arch/arm/include/asm/cell.h
+++ b/hypervisor/arch/arm/include/asm/cell.h
@@ -13,6 +13,7 @@
#ifndef _JAILHOUSE_ASM_CELL_H
#define _JAILHOUSE_ASM_CELL_H

+#include <asm/smp.h>
#include <asm/spinlock.h>
#include <asm/types.h>

@@ -24,6 +25,7 @@

struct arch_cell {
struct paging_structures mm;
+ struct smp_ops *smp;

spinlock_t caches_lock;
bool needs_flush;
diff --git a/hypervisor/arch/arm/include/asm/control.h b/hypervisor/arch/arm/include/asm/control.h
index 10a46c2..f1842ff 100644
--- a/hypervisor/arch/arm/include/asm/control.h
+++ b/hypervisor/arch/arm/include/asm/control.h
@@ -33,8 +33,6 @@ void arch_mmu_cell_destroy(struct cell *cell);
int arch_mmu_cpu_cell_init(struct per_cpu *cpu_data);
void arch_handle_sgi(struct per_cpu *cpu_data, u32 irqn);
void arch_handle_trap(struct per_cpu *cpu_data, struct registers *guest_regs);
-int arch_spin_init(void);
-unsigned long arch_cpu_spin(void);
struct registers* arch_handle_exit(struct per_cpu *cpu_data,
struct registers *regs);
void arch_reset_self(struct per_cpu *cpu_data);
diff --git a/hypervisor/arch/arm/include/asm/platform.h b/hypervisor/arch/arm/include/asm/platform.h
index d748ba3..df8575a 100644
--- a/hypervisor/arch/arm/include/asm/platform.h
+++ b/hypervisor/arch/arm/include/asm/platform.h
@@ -37,7 +37,7 @@
# endif /* GIC */

# define MAINTENANCE_IRQ 25
-# define HOTPLUG_MBOX ((void *)0x1c010030)
+# define SYSREGS_BASE 0x1c010000

#endif /* CONFIG_ARCH_VEXPRESS */

diff --git a/hypervisor/arch/arm/include/asm/smp.h b/hypervisor/arch/arm/include/asm/smp.h
new file mode 100644
index 0000000..858b875
--- /dev/null
+++ b/hypervisor/arch/arm/include/asm/smp.h
@@ -0,0 +1,52 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ * Authors:
+ * Jean-Philippe Brucker <jean-***@arm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#ifndef JAILHOUSE_ASM_SMP_H_
+#define JAILHOUSE_ASM_SMP_H_
+
+#ifndef __ASSEMBLY__
+
+enum smp_type {
+ SMP_PSCI,
+ SMP_SPIN
+};
+
+struct mmio_access;
+struct per_cpu;
+struct cell;
+
+struct smp_ops {
+ enum smp_type type;
+ int (*init)(struct cell *cell);
+
+ /*
+ * Uses the MMIO trap interface:
+ * returns TRAP_HANDLED when the mailbox is targeted, or else
+ * TRAP_UNHANDLED.
+ */
+ int (*mmio_handler)(struct per_cpu *cpu_data,
+ struct mmio_access *access);
+ /* Returns an address */
+ unsigned long (*cpu_spin)(struct per_cpu *cpu_data);
+};
+
+int arch_generic_smp_init(unsigned long mbox);
+int arch_generic_smp_mmio(struct per_cpu *cpu_data, struct mmio_access *access,
+ unsigned long mbox);
+unsigned long arch_generic_smp_spin(unsigned long mbox);
+
+int arch_smp_mmio_access(struct per_cpu *cpu_data, struct mmio_access *access);
+unsigned long arch_smp_spin(struct per_cpu *cpu_data, struct smp_ops *ops);
+void register_smp_ops(struct cell *cell);
+
+#endif /* !__ASSEMBLY__ */
+#endif /* !JAILHOUSE_ASM_SMP_H_ */
diff --git a/hypervisor/arch/arm/mmio.c b/hypervisor/arch/arm/mmio.c
index c27005f..bc283a7 100644
--- a/hypervisor/arch/arm/mmio.c
+++ b/hypervisor/arch/arm/mmio.c
@@ -13,6 +13,7 @@
#include <asm/io.h>
#include <asm/irqchip.h>
#include <asm/processor.h>
+#include <asm/smp.h>
#include <asm/traps.h>

/* Taken from the ARM ARM pseudocode for taking a data abort */
@@ -139,6 +140,8 @@ int arch_handle_dabt(struct per_cpu *cpu_data, struct trap_context *ctx)
access.size = size;

ret = irqchip_mmio_access(cpu_data, &access);
+ if (ret == TRAP_UNHANDLED)
+ ret = arch_smp_mmio_access(cpu_data, &access);

if (ret == TRAP_HANDLED) {
/* Put the read value into the dest register */
diff --git a/hypervisor/arch/arm/setup.c b/hypervisor/arch/arm/setup.c
index d2b6ff0..8a757c0 100644
--- a/hypervisor/arch/arm/setup.c
+++ b/hypervisor/arch/arm/setup.c
@@ -86,10 +86,6 @@ int arch_cpu_init(struct per_cpu *cpu_data)
/* Setup guest traps */
arm_write_sysreg(HCR, hcr);

- err = arch_spin_init();
- if (err)
- return err;
-
err = arch_mmu_cpu_cell_init(cpu_data);
if (err)
return err;
@@ -108,7 +104,10 @@ int arch_init_late(void)
/* Setup the SPI bitmap */
irqchip_cell_init(&root_cell);

- return 0;
+ /* Platform-specific SMP operations */
+ register_smp_ops(&root_cell);
+
+ return root_cell.arch.smp->init(&root_cell);
}

void arch_cpu_activate_vmm(struct per_cpu *cpu_data)
diff --git a/hypervisor/arch/arm/smp-vexpress.c b/hypervisor/arch/arm/smp-vexpress.c
new file mode 100644
index 0000000..00d5c3b
--- /dev/null
+++ b/hypervisor/arch/arm/smp-vexpress.c
@@ -0,0 +1,73 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ * Authors:
+ * Jean-Philippe Brucker <jean-***@arm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <asm/control.h>
+#include <asm/io.h>
+#include <asm/irqchip.h>
+#include <asm/paging.h>
+#include <asm/platform.h>
+#include <asm/smp.h>
+#include <jailhouse/processor.h>
+
+static unsigned long hotplug_mbox;
+
+static int smp_init(struct cell *cell)
+{
+ /* vexpress SYSFLAGS */
+ hotplug_mbox = SYSREGS_BASE + 0x30;
+
+ /* Map the mailbox page */
+ arch_generic_smp_init(hotplug_mbox);
+
+ return 0;
+}
+
+static unsigned long smp_spin(struct per_cpu *cpu_data)
+{
+ return arch_generic_smp_spin(hotplug_mbox);
+}
+
+static int smp_mmio(struct per_cpu *cpu_data, struct mmio_access *access)
+{
+ return arch_generic_smp_mmio(cpu_data, access, hotplug_mbox);
+}
+
+static struct smp_ops vexpress_smp_ops = {
+ .type = SMP_SPIN,
+ .init = smp_init,
+ .mmio_handler = smp_mmio,
+ .cpu_spin = smp_spin,
+};
+
+/*
+ * Store the guest's secondaries into our PSCI, and wake them up when we catch
+ * an access to the mbox from the primary.
+ */
+static struct smp_ops vexpress_guest_smp_ops = {
+ .type = SMP_SPIN,
+ .init = psci_cell_init,
+ .mmio_handler = smp_mmio,
+ .cpu_spin = psci_emulate_spin,
+};
+
+void register_smp_ops(struct cell *cell)
+{
+ /*
+ * mach-vexpress only writes the SYS_FLAGS once at boot, so the root
+ * cell cannot rely on this write to guess where the secondary CPUs
+ * should return.
+ */
+ if (cell == &root_cell)
+ cell->arch.smp = &vexpress_smp_ops;
+ else
+ cell->arch.smp = &vexpress_guest_smp_ops;
+}
diff --git a/hypervisor/arch/arm/smp.c b/hypervisor/arch/arm/smp.c
new file mode 100644
index 0000000..973da6f
--- /dev/null
+++ b/hypervisor/arch/arm/smp.c
@@ -0,0 +1,84 @@
+/*
+ * Jailhouse, a Linux-based partitioning hypervisor
+ *
+ * Copyright (c) ARM Limited, 2014
+ *
+ * Authors:
+ * Jean-Philippe Brucker <jean-***@arm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <asm/control.h>
+#include <asm/io.h>
+#include <asm/psci.h>
+#include <asm/setup.h>
+#include <asm/smp.h>
+#include <asm/traps.h>
+#include <jailhouse/printk.h>
+
+int arch_generic_smp_init(unsigned long mbox)
+{
+ void *mbox_page = (void *)(mbox & PAGE_MASK);
+ int err = arch_map_device(mbox_page, mbox_page, PAGE_SIZE);
+
+ if (err)
+ printk("Unable to map spin mbox page\n");
+
+ return err;
+}
+
+unsigned long arch_generic_smp_spin(unsigned long mbox)
+{
+ /*
+ * This is super-dodgy: we assume nothing wrote to the flag register
+ * since the kernel called smp_prepare_cpus, at initialisation.
+ */
+ return readl_relaxed((void *)mbox);
+}
+
+int arch_generic_smp_mmio(struct per_cpu *cpu_data, struct mmio_access *access,
+ unsigned long mbox)
+{
+ unsigned int cpu;
+ unsigned long mbox_page = mbox & PAGE_MASK;
+
+ if (access->addr < mbox_page || access->addr >= mbox_page + PAGE_SIZE)
+ return TRAP_UNHANDLED;
+
+ if (access->addr != mbox || !access->is_write)
+ /* Ignore all other accesses */
+ return TRAP_HANDLED;
+
+ for_each_cpu_except(cpu, cpu_data->cell->cpu_set, cpu_data->cpu_id) {
+ per_cpu(cpu)->guest_mbox.entry = access->val;
+ psci_try_resume(cpu);
+ }
+
+ return TRAP_HANDLED;
+}
+
+unsigned long arch_smp_spin(struct per_cpu *cpu_data, struct smp_ops *ops)
+{
+ /*
+ * Hotplugging CPU0 is not currently supported. It is always assumed to
+ * be the primary CPU. This is consistent with the linux behavior on
+ * most platforms.
+ * The guest image always starts at virtual address 0.
+ */
+ if (cpu_data->virt_id == 0)
+ return 0;
+
+ return ops->cpu_spin(cpu_data);
+}
+
+int arch_smp_mmio_access(struct per_cpu *cpu_data, struct mmio_access *access)
+{
+ struct smp_ops *smp_ops = cpu_data->cell->arch.smp;
+
+ if (smp_ops->mmio_handler)
+ return smp_ops->mmio_handler(cpu_data, access);
+
+ return TRAP_UNHANDLED;
+}
diff --git a/hypervisor/arch/arm/spin.c b/hypervisor/arch/arm/spin.c
deleted file mode 100644
index 07ba22d..0000000
--- a/hypervisor/arch/arm/spin.c
+++ /dev/null
@@ -1,59 +0,0 @@
-/*
- * Jailhouse, a Linux-based partitioning hypervisor
- *
- * Copyright (c) ARM Limited, 2014
- *
- * Authors:
- * Jean-Philippe Brucker <jean-***@arm.com>
- *
- * This work is licensed under the terms of the GNU GPL, version 2. See
- * the COPYING file in the top-level directory.
- */
-
-#include <asm/io.h>
-#include <asm/platform.h>
-#include <asm/setup.h>
-#include <asm/control.h>
-#include <jailhouse/printk.h>
-
-#if HOTPLUG_SPIN == 1
-int arch_spin_init(void)
-{
- unsigned long mbox = (unsigned long)HOTPLUG_MBOX;
- void *mbox_page = (void *)(mbox & PAGE_MASK);
- int err = arch_map_device(mbox_page, mbox_page, PAGE_SIZE);
-
- if (err)
- printk("Unable to map spin mbox page\n");
-
- return err;
-}
-
-
-unsigned long arch_cpu_spin(void)
-{
- u32 address;
-
- /*
- * This is super-dodgy: we assume nothing wrote to the flag register
- * since the kernel called smp_prepare_cpus, at initialisation.
- */
- do {
- wfe();
- address = readl_relaxed((void *)HOTPLUG_MBOX);
- cpu_relax();
- } while (address == 0);
-
- return address;
-}
-
-#elif HOTPLUG_PSCI == 1
-int arch_spin_init(void)
-{
-}
-
-unsigned long arch_cpu_spin(void)
-{
- /* FIXME: wait for a PSCI hvc */
-}
-#endif
--
1.7.9.5
Jean-Philippe Brucker
2014-08-08 12:03:21 UTC
This patch enables the routing of SPIs to the new cell's first CPU. When
destroyed, all SPIs are re-routed to the root cell.
An exhaustive implementation would save the targets of each IRQ before
transferring it to a new cell. Since Linux does not currently route SPIs
to secondary CPUs, and the root cell is not supposed to use devices that
will be assigned to guests anyway, it should be safe to route everything
to CPU0.

This patch follows the core configuration and the IOAPIC implementation,
which only allow the first 64 SPIs to be used.
A future patch will need to change this minimal bitmap size to 988.

Signed-off-by: Jean-Philippe Brucker <jean-***@arm.com>
---
hypervisor/arch/arm/control.c | 5 ++++
hypervisor/arch/arm/gic-v3.c | 32 +++++++++++++++++++++++++
hypervisor/arch/arm/include/asm/cell.h | 2 ++
hypervisor/arch/arm/include/asm/irqchip.h | 6 +++++
hypervisor/arch/arm/irqchip.c | 37 +++++++++++++++++++++++++++++
hypervisor/arch/arm/setup.c | 3 +++
6 files changed, 85 insertions(+)

diff --git a/hypervisor/arch/arm/control.c b/hypervisor/arch/arm/control.c
index 0bfcda6..0fcdba1 100644
--- a/hypervisor/arch/arm/control.c
+++ b/hypervisor/arch/arm/control.c
@@ -287,6 +287,9 @@ int arch_cell_create(struct per_cpu *cpu_data, struct cell *cell)
}
cell->arch.last_virt_id = virt_id - 1;

+ irqchip_cell_init(cell);
+ irqchip_root_cell_shrink(cell);
+
return 0;
}

@@ -303,6 +306,8 @@ void arch_cell_destroy(struct per_cpu *cpu_data, struct cell *cell)
percpu->virt_id = percpu->cpu_id;
arch_reset_cpu(cpu);
}
+
+ irqchip_cell_exit(cell);
}

void arch_config_commit(struct per_cpu *cpu_data,
diff --git a/hypervisor/arch/arm/gic-v3.c b/hypervisor/arch/arm/gic-v3.c
index ddd5d4e..f6b940c 100644
--- a/hypervisor/arch/arm/gic-v3.c
+++ b/hypervisor/arch/arm/gic-v3.c
@@ -33,6 +33,8 @@ static unsigned int gic_num_lr;
static unsigned int gic_num_priority_bits;
static u32 gic_version;

+extern void *gicd_base;
+extern unsigned int gicd_size;
static void *gicr_base;
static unsigned int gicr_size;

@@ -161,6 +163,34 @@ static int gic_cpu_init(struct per_cpu *cpu_data)
return 0;
}

+static void gic_route_spis(struct cell *config_cell, struct cell *dest_cell)
+{
+ int i;
+ u64 spis = config_cell->arch.spis;
+ void *irouter = gicd_base + GICD_IROUTER;
+ unsigned int first_cpu;
+
+ /* Use the core functions to retrieve the first physical id */
+ for_each_cpu(first_cpu, dest_cell->cpu_set)
+ break;
+
+ for (i = 0; i < 64; i++, irouter += 8) {
+ if (test_bit(i, (unsigned long *)&spis))
+ writeq_relaxed(first_cpu, irouter);
+ }
+}
+
+static void gic_cell_init(struct cell *cell)
+{
+ gic_route_spis(cell, cell);
+}
+
+static void gic_cell_exit(struct cell *cell)
+{
+ /* Reset interrupt routing of the cell's spis*/
+ gic_route_spis(cell, &root_cell);
+}
+
static int gic_send_sgi(struct sgi *sgi)
{
u64 val;
@@ -416,6 +446,8 @@ struct irqchip_ops gic_irqchip = {
.init = gic_init,
.cpu_init = gic_cpu_init,
.cpu_reset = gic_cpu_reset,
+ .cell_init = gic_cell_init,
+ .cell_exit = gic_cell_exit,
.send_sgi = gic_send_sgi,
.handle_irq = gic_handle_irq,
.inject_irq = gic_inject_irq,
diff --git a/hypervisor/arch/arm/include/asm/cell.h b/hypervisor/arch/arm/include/asm/cell.h
index 6bc6903..4dadfaf 100644
--- a/hypervisor/arch/arm/include/asm/cell.h
+++ b/hypervisor/arch/arm/include/asm/cell.h
@@ -28,6 +28,8 @@ struct arch_cell {
spinlock_t caches_lock;
bool needs_flush;

+ u64 spis;
+
unsigned int last_virt_id;
};

diff --git a/hypervisor/arch/arm/include/asm/irqchip.h b/hypervisor/arch/arm/include/asm/irqchip.h
index a4e625d..c2c34b7 100644
--- a/hypervisor/arch/arm/include/asm/irqchip.h
+++ b/hypervisor/arch/arm/include/asm/irqchip.h
@@ -46,6 +46,8 @@ struct sgi {
struct irqchip_ops {
int (*init)(void);
int (*cpu_init)(struct per_cpu *cpu_data);
+ void (*cell_init)(struct cell *cell);
+ void (*cell_exit)(struct cell *cell);
int (*cpu_reset)(struct per_cpu *cpu_data);

int (*send_sgi)(struct sgi *sgi);
@@ -81,6 +83,10 @@ int irqchip_init(void);
int irqchip_cpu_init(struct per_cpu *cpu_data);
int irqchip_cpu_reset(struct per_cpu *cpu_data);

+void irqchip_cell_init(struct cell *cell);
+void irqchip_cell_exit(struct cell *cell);
+void irqchip_root_cell_shrink(struct cell *cell);
+
int irqchip_send_sgi(struct sgi *sgi);
void irqchip_handle_irq(struct per_cpu *cpu_data);
void irqchip_eoi_irq(u32 irqn, bool deactivate);
diff --git a/hypervisor/arch/arm/irqchip.c b/hypervisor/arch/arm/irqchip.c
index 356f3be..c03b660 100644
--- a/hypervisor/arch/arm/irqchip.c
+++ b/hypervisor/arch/arm/irqchip.c
@@ -231,6 +231,43 @@ int irqchip_mmio_access(struct per_cpu *cpu_data, struct mmio_access *access)
return TRAP_UNHANDLED;
}

+static const struct jailhouse_irqchip *
+irqchip_find_config(struct jailhouse_cell_desc *config)
+{
+ const struct jailhouse_irqchip *irq_config =
+ jailhouse_cell_irqchips(config);
+
+ if (config->num_irqchips)
+ return irq_config;
+ else
+ return NULL;
+}
+
+void irqchip_cell_init(struct cell *cell)
+{
+ const struct jailhouse_irqchip *pins = irqchip_find_config(cell->config);
+
+ cell->arch.spis = (pins ? pins->pin_bitmap : 0);
+
+ irqchip.cell_init(cell);
+}
+
+void irqchip_cell_exit(struct cell *cell)
+{
+ const struct jailhouse_irqchip *root_pins =
+ irqchip_find_config(root_cell.config);
+
+ if (root_pins)
+ root_cell.arch.spis |= cell->arch.spis & root_pins->pin_bitmap;
+
+ irqchip.cell_exit(cell);
+}
+
+void irqchip_root_cell_shrink(struct cell *cell)
+{
+ root_cell.arch.spis &= ~(cell->arch.spis);
+}
+
/* Only the GIC is implemented */
extern struct irqchip_ops gic_irqchip;

diff --git a/hypervisor/arch/arm/setup.c b/hypervisor/arch/arm/setup.c
index b0209a2..5593c78 100644
--- a/hypervisor/arch/arm/setup.c
+++ b/hypervisor/arch/arm/setup.c
@@ -104,6 +104,9 @@ int arch_cpu_init(struct per_cpu *cpu_data)

int arch_init_late(void)
{
+ /* Setup the SPI bitmap */
+ irqchip_cell_init(&root_cell);
+
return 0;
}
--
1.7.9.5
--
You received this message because you are subscribed to the Google Groups "Jailhouse" group.
To unsubscribe from this group and stop receiving emails from it, send an email to jailhouse-dev+***@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
Jan Kiszka
2014-08-08 12:45:46 UTC
Permalink
Hi Jean-Philippe,
Post by Jean-Philippe Brucker
Hello,
This patch series is a proposal of the initial 32bit ARM support for
Jailhouse.
I based this port on the Versatile Express platform, allowing it to run
on ARM's system models. Since there is as many different memory maps in
the ARM ecosystem as implementations, some discussions will be needed to
add support for device trees before adding new platform.
For the moment, I did not add any major change to the core or the driver.
I also tested it on an Odroid-XU, but I am not comfortable adding it to
this series, since I used the non-mainline hardkernel tree with some
patches of my own to fix virtualisation support.
This series is NOT an official support from ARM ltd., but the result of
my summer placement, which ends this week. I will continue discussing and
working on it on my own time, using my home address.
First of all, thanks a lot for your contribution and, of course, many
thanks to ARM for making this possible!

Having a running Odroid-XU in reach, I would be very curious now to give
all this a try. Do you happen to have a step-by-step instruction for
that target at hand? Do you have a config, or how do I generate it?

Will look in technical details soon.

Jan
--
Siemens AG, Corporate Technology, CT RTC ITP SES-DE
Corporate Competence Center Embedded Linux
Jan Kiszka
2014-08-08 12:48:22 UTC
Permalink
Post by Jan Kiszka
Hi Jean-Philippe,
Post by Jean-Philippe Brucker
Hello,
This patch series is a proposal of the initial 32bit ARM support for
Jailhouse.
I based this port on the Versatile Express platform, allowing it to run
on ARM's system models. Since there is as many different memory maps in
the ARM ecosystem as implementations, some discussions will be needed to
add support for device trees before adding new platform.
For the moment, I did not add any major change to the core or the driver.
I also tested it on an Odroid-XU, but I am not comfortable adding it to
this series, since I used the non-mainline hardkernel tree with some
patches of my own to fix virtualisation support.
This series is NOT an official support from ARM ltd., but the result of
my summer placement, which ends this week. I will continue discussing and
working on it on my own time, using my home address.
First of all, thanks a lot for your contribution and, of course, many
thanks to ARM for making this possible!
Having a running Odroid-XU in reach, I would be very curious now to give
all this a try. Do you happen to have a step-by-step instruction for
that target at hand? Do you have a config, or how do I generate it?
Will look in technical details soon.
Do you happen to have all this in some public git repo, by chance?

Jan
--
Siemens AG, Corporate Technology, CT RTC ITP SES-DE
Corporate Competence Center Embedded Linux
Jan Kiszka
2014-08-08 12:57:43 UTC
Permalink
Post by Jan Kiszka
Hi Jean-Philippe,
Post by Jean-Philippe Brucker
Hello,
This patch series is a proposal of the initial 32bit ARM support for
Jailhouse.
I based this port on the Versatile Express platform, allowing it to run
on ARM's system models. Since there is as many different memory maps in
the ARM ecosystem as implementations, some discussions will be needed to
add support for device trees before adding new platform.
For the moment, I did not add any major change to the core or the driver.
I also tested it on an Odroid-XU, but I am not comfortable adding it to
this series, since I used the non-mainline hardkernel tree with some
patches of my own to fix virtualisation support.
This series is NOT an official support from ARM ltd., but the result of
my summer placement, which ends this week. I will continue discussing and
working on it on my own time, using my home address.
First of all, thanks a lot for your contribution and, of course, many
thanks to ARM for making this possible!
Having a running Odroid-XU in reach, I would be very curious now to give
all this a try. Do you happen to have a step-by-step instruction for
that target at hand? Do you have a config, or how do I generate it?
Now I actually read why there are no Odroid traces yet. :)

Nevertheless, I think it would be good to have your bits available
somewhere, even when they have further, rather rough dependencies. I
don't have a system model at hand. Or does QEMU/KVM work as a target platform?

Jan
--
Siemens AG, Corporate Technology, CT RTC ITP SES-DE
Corporate Competence Center Embedded Linux
Jean-Philippe Brucker
2014-08-08 19:11:28 UTC
Permalink
Post by Jan Kiszka
Post by Jan Kiszka
Hi Jean-Philippe,
Post by Jean-Philippe Brucker
Hello,
This patch series is a proposal of the initial 32bit ARM support for
Jailhouse.
I based this port on the Versatile Express platform, allowing it to run
on ARM's system models. Since there is as many different memory maps in
the ARM ecosystem as implementations, some discussions will be needed to
add support for device trees before adding new platform.
For the moment, I did not add any major change to the core or the driver.
I also tested it on an Odroid-XU, but I am not comfortable adding it to
this series, since I used the non-mainline hardkernel tree with some
patches of my own to fix virtualisation support.
This series is NOT an official support from ARM ltd., but the result of
my summer placement, which ends this week. I will continue discussing and
working on it on my own time, using my home address.
First of all, thanks a lot for your contribution and, of course, many
thanks to ARM for making this possible!
Having a running Odroid-XU in reach, I would be very curious now to give
all this a try. Do you happen to have a step-by-step instruction for
that target at hand? Do you have a config, or how do I generate it?
Now I actually read why there are now Odroid traces yet. :)
Nevertheless, I think it would be good to have your bits available
somewhere, even when they have further, rather rough dependencies. I
don't have a system model at hand. Or does QEMU/KVM work as target platform?
QEMU/KVM won't work because ARM doesn't support nested virtualisation.
You will either need a complete emulation or a hardware platform.
I won't have access to an Odroid-XU or a model either during the next few
months, so I will try to get something working on my Samsung Chromebook
instead.

I added a public repository that contains this patch series plus inmate
demos and configs, here:
https://github.com/jpbrucker/jailhouse
You can check the branch arm-exynos, which contains crude support for the
Odroid-XU.


Here are the steps I followed to run Jailhouse on the XU. I only ran a
simple guest that issued a few hypervisor calls, but didn't use any
device yet. Moreover, I didn't find any public documentation about the
platform.

* I used this thread to get the files needed for booting linux with
virtualisation extensions enabled:
http://forum.odroid.com/viewtopic.php?f=64&t=2778&start=60
* You'll need to flash your sdcard with the signed files, and a modified
u-boot available here: https://github.com/FiachAntaw/u-boot
You may need to set the hardware floats in arch/arm/cpu/armv7/config.mk:
PLATFORM_RELFLAGS += -fno-common -ffixed-r8 -mfloat-abi=hard -mfpu=vfpv3

* The kernel used is on the branch odroid-3.14.y-linaro from
https://github.com/hardkernel/linux/tree/odroid-3.14.y-linaro
with at least the patch included in this mail. Concatenate the zImage
with the generated arch/arm/boot/dts/exynos5410-odroidxu.dtb, put it in
a boot partition on the sdcard along with a boot.ini. I think I took
mine from the archlinux-arm image here and added the memreserve:
http://archlinuxarm.org/platforms/armv7/samsung/odroid-xu
I also had to remove the SCU accesses that conflicted with the default
jailhouse virtual address and disable CPU1 shutdown in
arch/arm/mach-exynos/hotplug.c. As I said, platform-specific support
will need lots of consolidation.
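For reference, the image-preparation step above boils down to something like this (file names follow the mail; the stand-in files here are just placeholders so the commands run anywhere):

```shell
# Stand-ins for the real build artifacts (replace with your own output):
printf 'kernel' > zImage                     # zImage from the hardkernel tree
printf 'dtb' > exynos5410-odroidxu.dtb       # blob from arch/arm/boot/dts/

# Append the DTB to the kernel image, as this u-boot setup expects, then
# copy the result to the boot partition together with boot.ini:
cat zImage exynos5410-odroidxu.dtb > zImage-dtb
```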

And here is the patch that allows destroying cells on odroid-3.14.y-linaro:

-- >8 --
Subject: ARM: EXYNOS: Fix hotplugging under a hypervisor

A CPU started in HYP mode may be plugged back into the kernel in SVC mode
after a hypervisor has been installed.
Currently, the secondary startup on exynos compares the CPU mode against
the boot mode, the HYP mask, and then assumes that it is booted in secure
mode.

This patch ensures that secure mode is only initialised when the CPSR
reports it.
---
arch/arm/include/uapi/asm/ptrace.h | 1 +
arch/arm/mach-exynos/headsmp.S | 2 +-
2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/arm/include/uapi/asm/ptrace.h b/arch/arm/include/uapi/asm/ptrace.h
index 5af0ed1..70ff6bf 100644
--- a/arch/arm/include/uapi/asm/ptrace.h
+++ b/arch/arm/include/uapi/asm/ptrace.h
@@ -53,6 +53,7 @@
#endif
#define FIQ_MODE 0x00000011
#define IRQ_MODE 0x00000012
+#define MON_MODE 0x00000016
#define ABT_MODE 0x00000017
#define HYP_MODE 0x0000001a
#define UND_MODE 0x0000001b
diff --git a/arch/arm/mach-exynos/headsmp.S b/arch/arm/mach-exynos/headsmp.S
index 08d3d3f..1d5359c 100644
--- a/arch/arm/mach-exynos/headsmp.S
+++ b/arch/arm/mach-exynos/headsmp.S
@@ -134,7 +134,7 @@ pen: ldr r7, [r6]
ands r0, r0, #MODE_MASK
subs r1, r0, r2
beq 3f
- subs r2, r2, #HYP_MODE
+ subs r2, r2, #MON_MODE
bne 3f

/* Setting NSACR to allow coprocessor access from non-secure mode */
--
1.7.9.5
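Rendered as plain C, my reading of the fixed check is roughly the following (a much-simplified sketch of the intent, not a transcription of the assembly; the mode constants come from the ptrace.h hunk above):

```c
#include <assert.h>
#include <stdint.h>

/* CPSR mode field values, per arch/arm/include/uapi/asm/ptrace.h */
#define MODE_MASK 0x0000001f
#define SVC_MODE  0x00000013
#define MON_MODE  0x00000016
#define HYP_MODE  0x0000001a

/* Before the patch, secondary startup effectively assumed "mode check
 * against HYP_MODE failed, therefore we booted in secure mode" and ran
 * the secure-side init (NSACR setup). Under a hypervisor, a hotplugged
 * CPU comes back in non-secure SVC mode, which broke that assumption.
 * After the patch, secure init only runs when the reported mode really
 * is the secure monitor mode. */
static int run_secure_init(uint32_t mode)
{
    return (mode & MODE_MASK) == MON_MODE;
}
```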
Santosh Shukla
2014-09-24 08:45:11 UTC
Permalink
Hi Jean,

Wanted to try out your arm port so few basic/trivial question inline. Thanks.
Post by Jean-Philippe Brucker
QEMU/KVM won't work because ARM doesn't support nested virtualisation.
You will either need a complete emulation or a hardware platform.
I won't have access to an odroid-xu or a model either during the next few
Is this port working on the foundation v7 model, or is the whole port
targeted at the Odroid XU? I am planning to use the port for my BE work,
first on the foundation model and then on a few other boards like the
Arndale and Odroid XU3.
Post by Jean-Philippe Brucker
months, so I will try to obtain something on my samsung chromebook
instead.
I added a public repository that contains this patch series plus inmate
https://github.com/jpbrucker/jailhouse
You can check the branch arm-exynos, that contains crude support for the
odroid-XU.
Jan Kiszka
2014-09-24 10:44:57 UTC
Permalink
Post by Santosh Shukla
Hi Jean,
Wanted to try out your arm port so few basic/trivial question inline. Thanks.
Post by Jean-Philippe Brucker
QEMU/KVM won't work because ARM doesn't support nested virtualisation.
You will either need a complete emulation or a hardware platform.
I won't have access to an odroid-xu or a model either during the next few
Is this port working on foundation v7 model? Or whole port targeted
for Odroid XU. I am planning to use port for my BE work fist on
foundation model then on few other board like Arndale, Odroid XU3.
We are also looking for an alternative to those Odroids. Has anyone
already worked with some of TI's Keystone II, or even their eval board
K2E? I'm particularly interested in the kernel quality and upstream
integration (after the Odroid disaster).

Regarding the ARM tree: I'm considering rebasing the patches over the
current devel head and pushing them more prominently into the main github
repository, at least in a side branch. For that I need signed-offs from
you, Jean-Philippe, for some commits in your arm-exynos branch. We need
them for the physical test platform, so I'd like to include them even if
they are rougher than the rest of the series.

Thanks,
Jan
--
Siemens AG, Corporate Technology, CT RTC ITP SES-DE
Corporate Competence Center Embedded Linux
Santosh Shukla
2014-09-24 11:08:53 UTC
Permalink
Post by Jan Kiszka
Post by Santosh Shukla
Hi Jean,
Wanted to try out your arm port so few basic/trivial question inline. Thanks.
Post by Jean-Philippe Brucker
QEMU/KVM won't work because ARM doesn't support nested virtualisation.
You will either need a complete emulation or a hardware platform.
I won't have access to an odroid-xu or a model either during the next few
Is this port working on foundation v7 model? Or whole port targeted
for Odroid XU. I am planning to use port for my BE work fist on
foundation model then on few other board like Arndale, Odroid XU3.
We are also looking for an alternative to those Odroids. Did anyone
already worked with some of TI's Keystone II or even their eval board
K2E? I'm particularly interested in the kernel quality and upstream
integration (after the Odroid disaster).
AFAICT about KS2, they don't have a KVM port upstreamed (I may be wrong,
but that's what I heard when I last checked with TI folks). The most
feasible thing is to get a base port of Jailhouse on the Versatile or
Vexpress ARM reference boards, or to have a foundation model port. In
fact, a foundation model port would be very handy and convenient for
trying out new feature additions, bug fixes etc., to an extent. The
Arndale is another option, with rich community KVM support, so I am
thinking to work first on the foundation model, and second on a port
for the Arndale/ChromeBook.
Post by Jan Kiszka
Regarding the ARM tree: I'm considering to rebase the patches over
current devel head and push them more prominently into the main github
repository, at least in a side branch. For that I need signed-offs by
you, Jean-Philippe, for some commits in your arm-exynos branch. We need
them for the physical test platform, so I'd like to include them even if
they are more rough than the rest of the series.
Okay, I am in the process of digesting Jean's port, so I am paying close
attention to it besides looking at my setup issue. I'll soon send my
feedback (testing/review/comments etc.) on the patches, maybe early next
week. However, keeping one dev/experimental branch would definitely help.
Post by Jan Kiszka
Thanks,
Jan
--
Siemens AG, Corporate Technology, CT RTC ITP SES-DE
Corporate Competence Center Embedded Linux
Jan Kiszka
2014-09-24 11:57:55 UTC
Permalink
Post by Santosh Shukla
Post by Jan Kiszka
Post by Santosh Shukla
Hi Jean,
Wanted to try out your arm port so few basic/trivial question inline. Thanks.
Post by Jean-Philippe Brucker
QEMU/KVM won't work because ARM doesn't support nested virtualisation.
You will either need a complete emulation or a hardware platform.
I won't have access to an odroid-xu or a model either during the next few
Is this port working on foundation v7 model? Or whole port targeted
for Odroid XU. I am planning to use port for my BE work fist on
foundation model then on few other board like Arndale, Odroid XU3.
We are also looking for an alternative to those Odroids. Did anyone
already worked with some of TI's Keystone II or even their eval board
K2E? I'm particularly interested in the kernel quality and upstream
integration (after the Odroid disaster).
afaict about KS2, they don't have kvm port (I may be wrong bu tlast i
checked with TI folks) up-streamed. Most feasible thing to get base
Well, KVM would be nice to have as a reference, but the critical question
is, of course, whether their boot loader supports taking over HYP mode,
and whether that is fixable for us if it doesn't.
Post by Santosh Shukla
port of Jailhouse on Versatile or vexpress ARM reference board Or to
have foundational model port, In fact foundation model port would be
very handy and convenient to try out new feature addition, bug fixes
etc.. to an extent. Arndale is another option with rich community kvm
support so I am thinking to work on first foundation model, second
port for Arndale/ ChromeBook.
Unfortunately, you need to hack the hardware to obtain an I/O interface
from a ChromeBook. That's at least true for the Samsung thing we have
here. Arndale Octa could be an alternative.
Post by Santosh Shukla
Post by Jan Kiszka
Regarding the ARM tree: I'm considering to rebase the patches over
current devel head and push them more prominently into the main github
repository, at least in a side branch. For that I need signed-offs by
you, Jean-Philippe, for some commits in your arm-exynos branch. We need
them for the physical test platform, so I'd like to include them even if
they are more rough than the rest of the series.
Okay, I am process to digest Jean port therefore paying more attention
beside looking at setup issue, I'll soon send my feedback
(testing/review/comment etc.) on patch may be early next week. However
keeping one dev/experimental branch would definitely help.
Yep, that's one idea of this.

Jan
--
Siemens AG, Corporate Technology, CT RTC ITP SES-DE
Corporate Competence Center Embedded Linux
Santosh Shukla
2014-09-24 12:56:35 UTC
Permalink
Post by Jan Kiszka
Post by Santosh Shukla
Post by Jan Kiszka
Post by Santosh Shukla
Hi Jean,
Wanted to try out your arm port so few basic/trivial question inline. Thanks.
Post by Jean-Philippe Brucker
QEMU/KVM won't work because ARM doesn't support nested virtualisation.
You will either need a complete emulation or a hardware platform.
I won't have access to an odroid-xu or a model either during the next few
Is this port working on foundation v7 model? Or whole port targeted
for Odroid XU. I am planning to use port for my BE work fist on
foundation model then on few other board like Arndale, Odroid XU3.
We are also looking for an alternative to those Odroids. Did anyone
already worked with some of TI's Keystone II or even their eval board
K2E? I'm particularly interested in the kernel quality and upstream
integration (after the Odroid disaster).
afaict about KS2, they don't have kvm port (I may be wrong bu tlast i
checked with TI folks) up-streamed. Most feasible thing to get base
Well, KVM would be nice to have as reference, but critical would be, of
course, if their boot loader does not support taking over the hyp mode
and this isn't fixable for us.
The Arndale does fine: U-Boot switches to HYP mode, as do the v7
foundation model and the Vexpress board.

Pasting kernel boot log snap of arndale booted in BE mode :

[ 0.036329] CPU: All CPU(s) started in HYP mode.
[ 0.036342] CPU: Virtualization extensions available.
[ 0.036848] devtmpfs: initialized


Perhaps Marc can chime in :) He could suggest a better direction for us,
i.e. which reference v7 hardware to go after or use for Jailhouse work.
Post by Jan Kiszka
Post by Santosh Shukla
port of Jailhouse on Versatile or vexpress ARM reference board Or to
have foundational model port, In fact foundation model port would be
very handy and convenient to try out new feature addition, bug fixes
etc.. to an extent. Arndale is another option with rich community kvm
support so I am thinking to work on first foundation model, second
port for Arndale/ ChromeBook.
Unfortunately, you need to hack the hardware to obtain an I/O interface
from a ChromeBook. That's at least true for the Samsung thing we have
here. Arndale Octa could be an alternative.
Okay, I will look at the Arndale.
Post by Jan Kiszka
Post by Santosh Shukla
Post by Jan Kiszka
Regarding the ARM tree: I'm considering to rebase the patches over
current devel head and push them more prominently into the main github
repository, at least in a side branch. For that I need signed-offs by
you, Jean-Philippe, for some commits in your arm-exynos branch. We need
them for the physical test platform, so I'd like to include them even if
they are more rough than the rest of the series.
Okay, I am process to digest Jean port therefore paying more attention
beside looking at setup issue, I'll soon send my feedback
(testing/review/comment etc.) on patch may be early next week. However
keeping one dev/experimental branch would definitely help.
Yep, that's one idea of this.
Jan
--
Siemens AG, Corporate Technology, CT RTC ITP SES-DE
Corporate Competence Center Embedded Linux
Jan Kiszka
2014-08-08 13:39:45 UTC
Permalink
Post by Jean-Philippe Brucker
* Virtual interrupts (GICv2 and GICv3)
All physical interrupts are taken to the hypervisor, and then directly
injected into the cell, using a series of List Registers belonging to the
GIC's interface. Software must also maintain a structure of postponed
interrupts, in case all list registers are in use or for the purpose of
injecting SGIs into another CPU.
I'm personally not too familiar with details here, but I listened to
some discussions in a Xen-on-ARM talk where Stefano Stabellini said they
would eventually add direct IRQ injection (Xen reflects them right now
too). Is there anything speaking against this in the context of
Jailhouse, or does it depend on hardware features that are not commonly available?
Post by Jean-Philippe Brucker
Software-generated interrupts (SGIs) are trapped and moderated by the
hypervisor. GICv2 uses memory-mapped accesses to the distributor, whilst
GICv3 uses system registers. After checking the SGI's targets, they are
stored in the CPUs pending structures, and injected using a
synchronisation SGI across the cores.
Private-Peripheral interrupts (PPIs) are dedicated to each core and
don't need moderation. A first attempt is made to directly write the list
registers, and are stored in the per-cpu data if it fails.
...or is this what we have as direct IRQ delivery on x86?

If yes, do we also have PPIs for a cell- or CPU-local timer IRQ source?
Post by Jean-Philippe Brucker
Shared-Peripheral interrupts (SPIs) are also directly injected, but are
configured in the global GIC distributor to target specific CPUs.
In this port, they are configured from the cell's bitmap: initially
assigned to the root's first CPU, they are re-routed when a cell is
created or destroyed.
All accesses to the distributor are filtered, to only allow the guests to
configure SPIs belonging them.
Missing features
================
* 64bit support, although this series aims to be abstract enough to ease
the 64bit port.
* Thumb2 host: this was not a priority on my TODO list, but should not be
too difficult to add.
* Hosts using PSCI: I did not have access to a boot-monitor with PSCI.
* Clusters: since the setup code currently uses a simple addition of the
MPIDR to deduce the per-cpu datas location and size, clusters are not
supported yet. entry.S will need to fetch the hypervisor's header to
find out the total number of online CPUs and generate those base
addresses, maybe by filling a hashmap.
* Exhaustive reset of the EL1 environment when starting a cell (Perf,
debug features, float...)
The last point sounds familiar when looking at x86... ;)
Post by Jean-Philippe Brucker
* IRQs greater than 64, because of the current bitmap limitation in the
cells configs. More than one irqchip could be used, but it would be
semantically confusing.
Is there an architectural limit on that number (per irqchip)? In
any case, extending that abstraction should be rather easy.
Post by Jean-Philippe Brucker
* IRQ remapping, although I understood that support may be added in the
core very soon.
See next for the bits required on x86 (including generic PCI logic).
Post by Jean-Philippe Brucker
* Clean platform handling, see below.
Points that need more discussions
=================================
* Linux on ARM heavily relies on Device Trees to describe the different
devices available and their features. The best way to provide a clean
device support in Jailhouse would be for the driver to pass the kernel
device tree in the root cell's configuration.
It would allow to find out the GIC and UART addresses, as well as the
platform-dependant hotplug method and mailbox address, if any.
I need to look at this in detail, but I was already wondering if a
Device Tree (with specific extensions) could one day replace the ad-hoc
config file format completely, also on x86.
Post by Jean-Philippe Brucker
* The debug functions are quite problematic: the hypervisor is entered at
EL1 and cannot guess which IO mapping is used by the kernel for the
serial console. As a result, there is no reliable way to print the first
few messages that happen before EL2 initialisation.
Currently, a wild guess assumes that this remapping is the same as the
one used for earlyprintk.
One solution would be to retain all the messages printed at EL1 in a
buffer, but this goes against the 'debug' nature of this printk.
Another would be to communicate, one way or another, the virtual address
of the UART allocated by the kernel from the driver.
So we basically only get printk out after hypervisor activation (i.e. at
the very end of the setup)? What about passing the missing parameters
from the driver via the hypervisor header?

Jan
--
Siemens AG, Corporate Technology, CT RTC ITP SES-DE
Corporate Competence Center Embedded Linux
Marc Zyngier
2014-08-08 13:53:01 UTC
Permalink
Post by Jan Kiszka
Post by Jean-Philippe Brucker
* Virtual interrupts (GICv2 and GICv3)
All physical interrupts are taken to the hypervisor, and then directly
injected into the cell, using a series of List Registers belonging to the
GIC's interface. Software must also maintain a structure of postponed
interrupts, in case all list registers are in use or for the purpose of
injecting SGIs into another CPU.
I'm personally not too familiar with details here, but I listened to
some discussions in a Xen-on-ARM talk where Stefano Stabellini said they
would eventually add direct IRQ injection (Xen reflects them right now
too). Is there anything speaking against this in the context of
Jailhouse, or does it depend on not commonly available hardware features?
Yes, there is a new architecture in the making (GICv4) that will allow
MSI-type interrupts to be injected directly into a guest.

But that's still a relatively long way away, and both GICv[23] still rely
on software injection of interrupts (the EOI path can be hardware-assisted
though).

Thanks,

M.
--
Jazz is not dead. It just smells funny.
Jean-Philippe Brucker
2014-08-08 14:47:31 UTC
Permalink
Post by Jan Kiszka
Post by Jean-Philippe Brucker
* Virtual interrupts (GICv2 and GICv3)
All physical interrupts are taken to the hypervisor, and then directly
injected into the cell, using a series of List Registers belonging to the
GIC's interface. Software must also maintain a structure of postponed
interrupts, in case all list registers are in use or for the purpose of
injecting SGIs into another CPU.
I'm personally not too familiar with details here, but I listened to
some discussions in a Xen-on-ARM talk where Stefano Stabellini said they
would eventually add direct IRQ injection (Xen reflects them right now
too). Is there anything speaking against this in the context of
Jailhouse, or does it depend on not commonly available hardware features?
There is no way to directly inject IRQs with GICv2 and v3. The
hypervisor has to take all IRQs, and associate a virtual ID to the
physical ID by means of the list registers.

The guest will take the interrupt, and see the IRQ as pending in its
virtual CPU interface.
Once it has handled the source, it writes the End Of Interrupt register,
accessible from the CPU interface, and the associated physical interrupt
is deactivated in the GIC, so there is no return to the hypervisor once
the IRQ is injected.
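For illustration, that hardware-assisted EOI path could be sketched as
follows. The field layout follows the GICv2 architecture spec for the
GICH_LR<n> registers, but the macro and function names here are made up
for the example, not Jailhouse code:

```c
#include <assert.h>
#include <stdint.h>

/* GICv2 list register (GICH_LR<n>) fields: when the HW bit is set and
 * a physical ID is linked, the guest's EOI of the virtual ID also
 * deactivates the physical interrupt in the GIC, so no trap back to
 * the hypervisor is needed after injection. */
#define GICH_LR_HW_BIT        (1u << 31)
#define GICH_LR_STATE_PENDING (1u << 28)
#define GICH_LR_PHYS_SHIFT    10

/* Encode a pending hardware interrupt for a list register. */
static uint32_t gich_lr_encode(uint32_t virt_id, uint32_t phys_id)
{
	return GICH_LR_HW_BIT
	     | GICH_LR_STATE_PENDING
	     | (phys_id << GICH_LR_PHYS_SHIFT)
	     | virt_id;
}
```

For a 1:1 mapping (virtual ID 42 linked to physical ID 42), this yields
0x9000A82A: HW bit, pending state, and both IDs in their fields.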
Post by Jan Kiszka
Post by Jean-Philippe Brucker
Software-generated interrupts (SGIs) are trapped and moderated by the
hypervisor. GICv2 uses memory-mapped accesses to the distributor, whilst
GICv3 uses system registers. After checking the SGI's targets, they are
stored in the CPUs pending structures, and injected using a
synchronisation SGI across the cores.
Private-Peripheral interrupts (PPIs) are dedicated to each core and
don't need moderation. A first attempt is made to write them directly to
the list registers, and they are stored in the per-CPU data if that fails.
...or is this what we have as direct IRQ delivery on x86?
If yes, do we also have PPIs for a cell- or CPU-local timer IRQ source?
PPIs are indeed used by the CPU-local devices such as the architected timer.
Unfortunately, there is no way to let them pass through if the
hypervisor needs to trap IPIs, so they are also injected using the list
registers. This injection currently adds an overhead of about 300
instructions for all physical interrupts.
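The try-the-list-registers-first, queue-on-failure logic described above
could look roughly like this (the structure layout, sizes and names are
hypothetical, for illustration only):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define NUM_LR      4   /* hypothetical number of list registers */
#define MAX_PENDING 16  /* hypothetical queue depth */

/* Hypothetical per-CPU state: list registers plus a queue of postponed
 * interrupts, to be flushed on the next maintenance interrupt. */
struct irq_state {
	uint32_t lr[NUM_LR];
	bool     lr_used[NUM_LR];
	uint32_t pending[MAX_PENDING];
	unsigned int num_pending;
};

/* Try a free list register first; fall back to the pending queue. */
static bool inject_irq(struct irq_state *s, uint32_t irq)
{
	for (unsigned int i = 0; i < NUM_LR; i++) {
		if (!s->lr_used[i]) {
			s->lr[i] = irq;
			s->lr_used[i] = true;
			return true;
		}
	}
	if (s->num_pending == MAX_PENDING)
		return false;  /* overflow: caller must handle */
	s->pending[s->num_pending++] = irq;
	return true;
}
```

Once all list registers are occupied, further injections land in the
pending queue, which is what the per-CPU structures mentioned above hold.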
Post by Jan Kiszka
Post by Jean-Philippe Brucker
Shared-Peripheral interrupts (SPIs) are also directly injected, but are
configured in the global GIC distributor to target specific CPUs.
In this port, they are configured from the cell's bitmap: initially
assigned to the root's first CPU, they are re-routed when a cell is
created or destroyed.
All accesses to the distributor are filtered, to only allow the guests to
configure SPIs belonging to them.
Missing features
================
* 64bit support, although this series aims to be abstract enough to ease
the 64bit port.
* Thumb2 host: this was not a priority on my TODO list, but should not be
too difficult to add.
* Hosts using PSCI: I did not have access to a boot-monitor with PSCI.
* Clusters: since the setup code currently uses a simple addition of the
MPIDR to deduce the per-CPU data's location and size, clusters are not
supported yet. entry.S will need to fetch the hypervisor's header to
find out the total number of online CPUs and generate those base
addresses, maybe by filling a hashmap.
* Exhaustive reset of the EL1 environment when starting a cell (Perf,
debug features, float...)
The last point sounds familiar when looking at x86... ;)
Post by Jean-Philippe Brucker
* IRQs greater than 64, because of the current bitmap limitation in the
cells configs. More than one irqchip could be used, but it would be
semantically confusing.
Is there an architectural limit on that number (per irqchip)? In
any case, extending that abstraction should be rather easy.
Yes, there are at most 988 SPIs. IRQs 0-31 are PPIs and SGIs, and IRQs
above 1019 are reserved for spurious values and message-based interrupts.
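Those ID ranges make for a trivial classification helper; the function
name here is invented for the example, but the ranges are the ones from
the GIC architecture:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* GIC interrupt ID ranges (GICv2/v3):
 *   0-15      SGIs (software-generated)
 *   16-31     PPIs (private peripheral)
 *   32-1019   SPIs (shared peripheral, hence at most 988)
 *   1020-1023 reserved (spurious and special values)
 */
static bool is_spi(uint32_t irq)
{
	return irq >= 32 && irq <= 1019;
}
```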
Post by Jan Kiszka
Post by Jean-Philippe Brucker
* IRQ remapping, although I understood that support may be added in the
core very soon.
See next for the bits required on x86 (including generic PCI logic).
Post by Jean-Philippe Brucker
* Clean platform handling, see below.
Points that need more discussions
=================================
* Linux on ARM heavily relies on Device Trees to describe the different
devices available and their features. The best way to provide a clean
device support in Jailhouse would be for the driver to pass the kernel
device tree in the root cell's configuration.
It would allow finding out the GIC and UART addresses, as well as the
platform-dependent hotplug method and mailbox address, if any.
I need to look at this in detail, but I was already wondering if a
Device Tree (with specific extensions) could one day replace the ad-hoc
config file format completely, also on x86.
Device trees also contain the possible CPU map, so passing it in the
configuration could ease the cluster support on ARM, and the calculation
of the total size needed for CPU data.
Post by Jan Kiszka
Post by Jean-Philippe Brucker
* The debug functions are quite problematic: the hypervisor is entered at
EL1 and cannot guess which IO mapping is used by the kernel for the
serial console. As a result, there is no reliable way to print the first
few messages that happen before EL2 initialisation.
Currently, a wild guess assumes that this remapping is the same as the
one used for earlyprintk.
One solution would be to retain all the messages printed at EL1 in a
buffer, but this goes against the 'debug' nature of this printk.
Another would be to communicate, one way or another, the virtual address
of the UART allocated by the kernel from the driver.
So we basically only get printk out after hypervisor activation (i.e. at
the very end of the setup)? What about passing the missing parameters
from the driver via the hypervisor header?
The setup on ARM needs to activate the hypervisor very early, in
arch_cpu_init, to have access to all the configuration registers.
But we cannot reliably use printk before cpu_init for the moment, that
is, before the hypervisor is able to use its own IO mappings.

So far, I haven't found any nice way to get the iomap used by a Linux
driver. But passing this address in the header would indeed solve the
issue.
Attempting to map an alias into a reserved page before entering the
hypervisor would also work.

Thanks,
Jean-Philippe
Henning Schild
2014-08-12 15:29:00 UTC
Permalink
Jean-Philippe,

thanks for this massive contribution and the verbose description. I did
not look at the details yet and am also no ARM expert. Cool to see the
traction the project gained, and I hope we can integrate your changes
soon.

Henning
Valentine Sinitsyn
2014-08-13 04:31:53 UTC
Permalink
Hi Jean-Philippe,

Sorry for the late reply - I overlooked the thread, as I'm no ARM
expert either.

Congratulations, and glad to see Jailhouse now supports ARM! It's
especially impressive that you completed a whole new port faster than I
finished amending the existing one. :-)

--
Regards,
Valentine Sinitsyn