Building prplMesh for prplOS
Frederik Van Bogaert edited this page 2024-01-16 10:20:52 +00:00

[[TOC]]

Note: it's often easier to let our CI do the building for you. See this page for instructions on how to do that.

Method 1: Using an ipk file (DEPRECATED)

NOTE about dependencies

Using the ipk file to install or update prplMesh is tricky these days, because prplMesh heavily depends on having the right version of pWHM (and, to a lesser extent, a recent-enough version of Ambiorix).

For this reason, using an ipk file to update the prplMesh version in your standard prplOS install is no longer recommended.

Using the ipk file

If you want to build and deploy prplMesh to an existing OpenWrt/prplWrt installation, you can simply build and install an .ipk file. This is similar to a .deb or .rpm file on desktop Linux, or an .msi package on Windows.

Here, we assume you've already built and flashed some version of OpenWrt or prplOS to the device in question. Follow the prplOS instructions on how to build and deploy it to your chosen target.

Once that is done, you can simply deploy an ipk file containing prplMesh to the device. This has the advantage of not having to rebuild OpenWrt/prplOS all the time.

Using a pre-built ipk

See the separate page about obtaining images.

Alternatively, you can get it from CI directly: go to the pipelines page, then click on the latest pipeline for the master branch. From there, you can choose the job that corresponds to your target.

On the jobs page, you can use the "browse" button on the left to browse the artifacts and find the ipk.

The prebuilt artifacts include the full firmware image and the prplmesh.ipk. The full firmware image includes prplmesh as well, but at an older, released version. Thus, the ipk file has to be installed as well.
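If you prefer the command line, a job's artifacts can also be fetched through GitLab's job-artifacts API. The sketch below only composes and prints the download URL; the job name is hypothetical, so substitute the one matching your target from the pipeline page.

```shell
# Compose the GitLab API URL that downloads the latest artifacts of one job
# on the master branch. PROJECT is the URL-encoded project path; JOB is a
# hypothetical job name -- use the job that matches your target.
PROJECT="prpl-foundation%2Fprplmesh%2FprplMesh"
JOB="build-for-netgear-rax40"
URL="https://gitlab.com/api/v4/projects/${PROJECT}/jobs/artifacts/master/download?job=${JOB}"
echo "$URL"
```

Fetching that URL (for example with curl -L -o artifacts.zip "$URL") yields a zip archive of the job's artifacts, in which you can find the ipk.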

The --whm jobs include the pWHM backend.

Building an ipk locally using Docker

You can build an ipk locally using, for example, the following command (replace netgear-rax40 with the target you want to build for): tools/docker/builder/openwrt/build.sh -d netgear-rax40

This example will generate a "prplmesh*.ipk" file in the ./build/netgear-rax40 directory.

Note that this command will first build the whole OpenWrt SDK and a full firmware image, before building an ipk. In other words, expect a long build time the first time you run it for each target, or every time you need to update the SDK.

On subsequent builds, Docker will use its cache and the build will be much faster (roughly the time it takes to cross-compile prplMesh, plus the overhead of make and of building the ipk).

The results directory includes the full firmware image and the prplmesh.ipk. The full firmware image includes prplmesh as well, but at an older, released version. Thus, the ipk file has to be installed as well.

To build the firmware including the pWHM backend, add --whm to the build command: tools/docker/builder/openwrt/build.sh -d netgear-rax40 --whm

Deploying an ipk

The tools/deploy_ipk.sh script allows you to:

  • copy the ipk to the target
  • remove any existing prplMesh version
  • install the new one

It can be invoked like this: sh tools/deploy_ipk.sh <path_to_ipk>

Make sure you're not running another opkg command at the same time.
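For reference, the script roughly amounts to a copy followed by a remove-and-reinstall. The sketch below only prints the equivalent commands for review rather than executing them; TARGET (the device's SSH address) and the ipk path are placeholders for your setup.

```shell
# Print the manual equivalent of deploy_ipk.sh; adjust TARGET and IPK
# to your device and build output before running the printed commands.
TARGET="root@192.168.1.1"
IPK="build/netgear-rax40/prplmesh.ipk"
REMOTE_CMD="opkg remove prplmesh; opkg install /tmp/$(basename "$IPK")"
echo "scp $IPK $TARGET:/tmp/"
echo "ssh $TARGET \"$REMOTE_CMD\""
```

Read the script itself for the authoritative sequence; this is just the general shape of what it does.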

Method 2: Standard prplOS full build with prplMesh (RECOMMENDED)

For this method, you will need to know the feed name for your platform. Please consult this handy table:

Platform: Intel (Netgear RAX40, NEC AX3000, Axepoint)
Platform feed: intel_mips
Artifacts (prplos/bin/targets/intel_mips/xrx500/):

  • AX3000_1600_ETH_11AXUCI_ASURADA-squashfs-fullimage.img (NEC)
  • AX3000_1600_ETH_11AXUCI_ASURADA-ubifs-fullimage.img
  • AX3000_1600_ETH_11AXUCI-squashfs-fullimage.img (Axepoint)
  • AX3000_1600_ETH_11AXUCI-ubifs-fullimage.img
  • AX6000_2000_ETH_11AXUCI-squashfs-fullimage.img
  • AX6000_2000_ETH_11AXUCI-ubifs-fullimage.img
  • NETGEAR_RAX40-squashfs-fullimage.img
  • NETGEAR_RAX40-ubifs-fullimage.img
  • AX3000_1600_ETH_11AXUCI_ASURADA-initramfs-kernel.bin
  • AX3000_1600_ETH_11AXUCI-initramfs-kernel.bin
  • AX6000_2000_ETH_11AXUCI-initramfs-kernel.bin
  • NETGEAR_RAX40-initramfs-kernel.bin
  • NETGEAR_RAX40-squashfs-sysupgrade.bin
  • NETGEAR_RAX40-ubifs-sysupgrade.bin

Platform: Turris Omnia
Platform feed: mvebu
Artifacts (prplos/bin/targets/mvebu/cortexa9/):

  • omnia-medkit-openwrt-mvebu-cortexa9-cznic_turris-omnia-initramfs.tar.gz
  • openwrt-mvebu-cortexa9-cznic_turris-omnia-initramfs-kernel.bin
  • openwrt-mvebu-cortexa9-cznic_turris-omnia-kernel.bin
  • openwrt-mvebu-cortexa9-cznic_turris-omnia-sysupgrade.img.gz

Platform: Gl.iNet
Platform feed: ipq40xx
Artifacts (prplos/bin/targets/ipq40xx/generic/):

  • openwrt-ipq40xx-generic-glinet_gl-b1300-squashfs-sysupgrade.bin
  • openwrt-ipq40xx-generic-glinet_gl-b1300-initramfs-fit-uImage.itb

Platform: WNC/QCA "Haze"
Platform feed: ipq807x
Artifacts (prplos/bin/targets/ipq807x/generic/):

  • openwrt-ipq807x-generic-prpl_haze-squashfs-sysupgrade.bin
  • openwrt-ipq807x-generic-prpl_haze-initramfs-uImage.itb
  • openwrt-ipq807x-generic-prpl_haze-squashfs-factory.bin

Platform: MxL Open Service Platform (URX)
Platform feed: mxl_x86_osp_tb341
Artifacts (prplos/bin/targets/intel_x86/lgm/):

  • openwrt-intel_x86-lgm-PRPL_OSP_TB341-osp_tb341_fullimage.img
  • openwrt-intel_x86-lgm-PRPL_OSP_TB341-kernel.bin
  • openwrt-intel_x86-lgm-PRPL_OSP_TB341-squashfs-fs.rootfs
  • openwrt-intel_x86-lgm-PRPL_OSP_TB341-osp_tb341_pon_fullimage.img
  • openwrt-intel_x86-lgm-PRPL_OSP_TB341-osp_tb341_pon.dtb
  • openwrt-intel_x86-lgm-PRPL_OSP_TB341-osp_tb341.dtb

Note that building for the OSP platform requires extra steps and access to MxL git repositories (see https://gitlab.com/prpl-foundation/prplos/prplos/-/wikis/MaxLinear-Open-Service-Platform).

Image build steps

1. Clone prplOS

git clone git@gitlab.com:prpl-foundation/prplos/prplos.git

2. Configure packages.

Enter the prplos folder and use the configuration command below:

# Configure prplOS with common prplMesh
./scripts/gen_config.py prpl <platform_feed>

Replace <platform_feed> with the platform feed name of your target, listed in the table above.
Note: to include extra developer tools (tcpdump, strace, gdb) in the final image, you can add "debug" as an extra profile when invoking the gen_config.py script.

Example: configure an image for Haze with debug tools:

./scripts/gen_config.py prpl ipq807x debug

3. (Optional) Configure a local prplMesh version.

By default, prplOS will download a predefined, released version of prplMesh. If you want to build local modifications of prplMesh, you have to clone prplMesh separately and prepare the prplOS build to use it:

make package/prplmesh/prepare USE_SOURCE_DIR="/home/user/prplMesh" V=s

4. Build prplOS image.

make -j$(nproc)

You can add the flag V=sc to this command for more verbose output in case of problems.

5. Check artifacts

As a result, you will get a full prplOS image with prplMesh for your platform (see the Artifacts column in the table above). These images can be used to upgrade your target using U-Boot or sysupgrade.

Build prplMesh via prplOS toolchain

  1. Clone prplMesh to your filesystem and note its path:
cd ~/
git clone https://gitlab.com/prpl-foundation/prplmesh/prplMesh.git

# Path to prplMesh is /home/user/prplMesh
  2. Enter the prplOS folder for the needed platform and prepare the prplMesh sources:
make package/prplmesh/prepare USE_SOURCE_DIR="/home/user/prplMesh" V=s
  3. Build the prplMesh package using the sources from /home/user/prplMesh:
make package/prplmesh/compile V=sc -j"$(nproc)"
  4. As a result, you will get a prplmesh ipk package located at:
prplos/staging_dir/packages/PLATFORM/prplmesh_*.ipk

Method 3: Bleeding edge build (using Docker)

This method is also used by our CI, and is the most reliable way to get a build for any prplMesh branch (including local changes). It's described under "Building an ipk locally using Docker", above.

Method 4: Bleeding edge build (without prplOS)

This method is not recommended. It uses a pre-installed toolchain for the target device.

Required packages

These are generally required on a standard debian/ubuntu host:

sudo apt install subversion g++ zlib1g-dev build-essential git python time \
                 libncurses5-dev gawk gettext unzip file libssl-dev wget \
                 libelf-dev ecj fastjar java-propose-classpath python3-distutils

Clone prplMesh

git clone https://gitlab.com/prpl-foundation/prplmesh/prplMesh.git
cd prplMesh

Prepare ~/external_toolchain.sh

NOTE: Replace the variables below with values that fit your host PC and target device. These values are for the RAX40/Axepoint/NEC AX3000 only.

export PRPLMESH_PLATFORM_TYPE=ugw
export PRPLMESH_PLATFORM_BASE_DIR=</path/to/prplos>
export STAGING_DIR=${PRPLMESH_PLATFORM_BASE_DIR}/staging_dir/target-mips_24kc+nomips16_musl
export PRPLMESH_PLATFORM_BUILD_NAME=target-mips_24kc+nomips16_musl
export PRPLMESH_PLATFORM_TOOLCHAIN=toolchain-mips_24kc+nomips16_gcc-8.3.0_musl
export PRPLMESH_PLATFORM_TOOLCHAIN_PREFIX=mips-openwrt-linux-musl-
export CMAKE_TOOLCHAIN_FILE=</path/to/prplMesh>/tools/cmake/toolchain/openwrt.cmake
export STAGING_PREFIX="$STAGING_DIR"
export PATH=$PRPLMESH_PLATFORM_BASE_DIR/staging_dir/host/bin:$PATH
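Note that STAGING_DIR is just the prplOS base directory plus staging_dir/ plus the build name, so those variables must stay in sync. A quick sanity check of the derivation, using a placeholder base directory:

```shell
# Recompute the derived staging path from its two components and print it;
# /path/to/prplos is a placeholder for your actual prplOS checkout.
PRPLMESH_PLATFORM_BASE_DIR=/path/to/prplos
PRPLMESH_PLATFORM_BUILD_NAME=target-mips_24kc+nomips16_musl
STAGING_DIR="${PRPLMESH_PLATFORM_BASE_DIR}/staging_dir/${PRPLMESH_PLATFORM_BUILD_NAME}"
echo "STAGING_DIR=$STAGING_DIR"
```

If the printed directory does not exist in your prplOS tree, the toolchain was likely built with a different target profile.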

Build & Install

Note: when building for non-Intel platforms (i.e. not Axepoint, NEC or RAX40), remove /opt/intel from the find-root path and change the --build parameter.

source ~/external_toolchain.sh
cmake -S . -B build-rax40 \
-DCMAKE_BUILD_TYPE=Debug \
-DBWL_TYPE=DWPAL \
-DCMAKE_FIND_ROOT_PATH="${STAGING_DIR}/opt/intel;${STAGING_DIR}" \
-DCMAKE_INSTALL_PREFIX=/opt/prplmesh \
-DCMAKE_TOOLCHAIN_FILE="$CMAKE_TOOLCHAIN_FILE"
 
DESTDIR=$PWD/install-rax40 cmake --build build-rax40 --target install -j$(getconf _NPROCESSORS_ONLN)

Building the pWHM backend in prplMesh

pWHM interacts with the other prplMesh processes at two layers:

  • bwl: provides a low-level API to control/configure Wi-Fi radios, access points and endpoints, and to receive events. With the pWHM backend, these APIs rely entirely on the parameters and RPCs available in the TR-181 Device.WiFi data model exposed by pWHM.
  • bpl: provides APIs to read the local Wi-Fi configuration (as an alternative to the uci and linux bpl backends) and to get information from the data model. In addition, a Wi-Fi configuration monitor has been added to detect local Wi-Fi config changes and notify the controller, so that they can be propagated when needed. It uses the wbapi library (WestBound API), which gives Ambiorix clients access to the bus and offers APIs to set/get objects and parameters, and to subscribe to events (instance addition/deletion, value changes, custom notifications, ...).

In order to build prplMesh with the pWHM bpl/bwl backends, the build option CONFIG_USE_PRPLMESH_WHM must be enabled. This flag will:

  • enable pwhm and its dependencies (libswlc, libswla) if not already enabled.
  • build the pWHM bwl/bpl backends within prplMesh (libbwl.so, libbpl.so).
  • deploy libwbapi, required for client access to the system bus via Ambiorix.

Below is an example prplMesh build configuration enabling the pWHM backends:

#
# PrplMesh
#
CONFIG_PACKAGE_prplmesh=y
CONFIG_USE_PRPLMESH_WHM=y
CONFIG_PACKAGE_pwhm=y
CONFIG_PACKAGE_libswla=y
CONFIG_PACKAGE_libswlc=y

#
# Select pwhm build options
#
CONFIG_SAH_LIB_WLD=y
CONFIG_SAH_WLD_INIT_SCRIPT="prplmesh_whm"
CONFIG_SAH_WLD_INIT_ORDER=60

Before building:

export CONFIG_USE_PRPLMESH_WHM=y

This flag is part of the prpl profile, so running

./scripts/gen_config.py prpl

will automatically build the pWHM backend; manually adding the CMake flags below will not be enough:

-DENABLE_NBAPI=ON \
-DUSE_PRPLMESH_WHM=ON \
-DUBUS_SOCK="-DAMBIORIX_BACKEND_PATH=\\\"/usr/bin/mods/amxb/mod-amxb-ubus.so\\\" -DAMBIORIX_BUS_URI=\\\"ubus:/var/run/ubus.sock\\\"" \
-DWBAPI_SOCK="-DAMBIORIX_WBAPI_BACKEND_PATH=\\\"/usr/bin/mods/amxb/mod-amxb-ubus.so\\\" -DAMBIORIX_WBAPI_BUS_URI=\\\"ubus:/var/run/ubus.sock\\\""