Tutorial: Write your own nodes¶
Within MARV, nodes are responsible for extracting and processing data from your log files as the basis for filtering and visualization. MARV Robotics already ships with a set of nodes (marv_robotics). Here you will find a quick tutorial on writing your own.
All code and configuration of this tutorial is included with your release of MARV Robotics EE in the top-level tutorial folder.
Prerequisites¶
$ ls -1
scanroot # holds bag files (Setup basic site)
site # holds config and databases (Setup basic site)
tutorial # link to tutorial directory
venv # python virtualenv (Installation)
Create python package¶
First, you need a python package to hold the code of your nodes. Don’t worry too much about the name: nodes can be easily moved to other packages later on:
$ mkdir mynodes # directory holding python distribution
$ cp tutorial/code/setup.py mynodes/
$ mkdir mynodes/mynodes # directory holding python package
$ touch mynodes/mynodes/__init__.py
It makes sense for the distribution directory name to match the name provided in setup.py (see below). The python package directory is listed there as packages – it must not contain dashes, but may contain underscores. One python distribution can contain many packages. At some point you might want to dive into Python Packaging.
We placed the Python code of this tutorial into the public domain, so you can freely pick from it. Be careful not to copy the license file and headers, and adjust setup.py accordingly, unless you intend to release your code into the public domain as well:
setup.py
# -*- coding: utf-8 -*-
#
# This file is part of MARV Tutorial Code by Ternaris
#
# To the extent possible under law, the person who associated CC0
# with MARV Tutorial Code has waived all copyright and related or
# neighboring rights to MARV Tutorial Code.
#
# You should have received a copy of the CC0 legalcode along with
# this work. If not, see
# <http://creativecommons.org/publicdomain/zero/1.0/>.
from __future__ import absolute_import, division, print_function
from setuptools import setup
setup(name='marv-tutorial-code',
      version='1.0',
      description='MARV Tutorial Code',
      url='',
      author='Ternaris',
      author_email='team@ternaris.com',
      license='CC0',
      packages=['marv_tutorial'],
      install_requires=['marv',
                        'marv-robotics',
                        'matplotlib',
                        'mpld3'],
      include_package_data=True,
      zip_safe=False)
The only purpose of the __future__ imports here is to get you into the habit of not forgetting them.
Next, it’s a good idea to place this code under version control:
$ cp tutorial/code/.gitignore mynodes/
$ cd mynodes
$ git init
$ git add .
$ git commit -m 'initial'
$ cd -
Finally, for marv to make use of your nodes, you need to install the package into the virtual python environment. Install it in development mode (-e) so that changes are picked up without the need to reinstall. Activate the virtualenv first, if it is not already activated. Most of the time we use just $ as prompt; you can run those commands with an activated virtualenv as well, creating a virtualenv being a notable exception. Whenever we use (venv) $ as prompt, the virtualenv has to be activated:
$ source venv/bin/activate
(venv) $ pip install -e mynodes
First node: Extract an image¶
For the sake of simplicity, we are placing all code directly into the package’s __init__.py. Later you might want to split this up into individual modules or packages. Following is the full code of the image extraction node; we will dissect it shortly.
marv_tutorial/__init__.py
from __future__ import absolute_import, division, print_function

import json
import os

import cv2
import cv_bridge
import matplotlib; matplotlib.use('Agg')
import matplotlib.pyplot as plt
import mpld3
imgmsg_to_cv2 = cv_bridge.CvBridge().imgmsg_to_cv2

import marv
from marv_nodes.types_capnp import File
from marv_detail.types_capnp import Section, Widget
from marv_robotics.bag import get_message_type, raw_messages

TOPIC = '/wide_stereo/left/image_rect_throttle'


@marv.node(File)
@marv.input('cam', marv.select(raw_messages, TOPIC))
def image(cam):
    """Extract first image of input stream to jpg file.

    Args:
        cam: Input stream of raw rosbag messages.

    Returns:
        File instance for first image of input stream.
    """
    # Set output stream title and pull first message
    yield marv.set_header(title=cam.topic)
    msg = yield marv.pull(cam)
    if msg is None:
        return

    # Deserialize raw ros message
    pytype = get_message_type(cam)
    rosmsg = pytype()
    rosmsg.deserialize(msg.data)

    # Write image to jpeg and push it to output stream
    name = '{}.jpg'.format(cam.topic.replace('/', ':')[1:])
    imgfile = yield marv.make_file(name)
    img = imgmsg_to_cv2(rosmsg, "rgb8")
    cv2.imwrite(imgfile.path, img, (cv2.IMWRITE_JPEG_QUALITY, 60))
    yield marv.push(imgfile)
At first glance, there are three blocks of imports: python standard library, external libraries, and the project’s own modules. Further, we define a topic used throughout the tutorial, and the node appears to be based on a Python generator function that uses yield expressions.
Let’s look at this piece by piece.
Declare image node¶
@marv.node(File)
@marv.input('cam', marv.select(raw_messages, TOPIC))
def image(cam):
    """Extract first image of input stream to jpg file.

    Args:
        cam: Input stream of raw rosbag messages.

    Returns:
        File instance for first image of input stream.
    """
We are declaring a marv.node using decorator syntax based on a function named image, which also becomes the name of the node. The node will output File messages and consume a selected topic of raw messages as input stream cam. According to the docstring it will return the first image of this stream. The docstring follows the Google Python Style Guide, which is understood by Sphinx via Napoleon to generate documentation.
Yield to interact with marv¶
# Set output stream title and pull first message
yield marv.set_header(title=cam.topic)
msg = yield marv.pull(cam)
if msg is None:
return
The input stream’s topic is set as the title of the image node’s output stream, and we pull the first message from the input stream. In case there is none, we simply return without publishing anything.
Yield expressions turn Python functions into generator functions. In short: yield works like return, but preserves the function state so that the calling context – the marv framework – can reactivate the generator function and resume operation where it left off, as if it were a function call with an optional return value. In case of the second line, marv sends the first message of the cam input stream as response to the marv.pull; it is assigned to the msg variable and operation continues within the image node until the next yield statement or the end of the function.
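To illustrate the mechanics independently of marv, here is a minimal toy sketch of this request/response pattern. The toy_node and toy_driver names are made up for illustration and are in no way part of the marv API, which drives nodes in a far more elaborate fashion:

def toy_node():
    """Toy generator mimicking the pull/push pattern used above."""
    msg = yield 'pull'              # like marv.pull: ask the driver for a message
    if msg is None:
        return
    yield ('push', msg.upper())     # like marv.push: hand a result to the driver

def toy_driver(node, messages):
    """Minimal stand-in for the framework driving the generator."""
    gen = node()
    request = next(gen)             # run the generator up to its first yield
    results = []
    try:
        while True:
            if request == 'pull':
                msg = messages.pop(0) if messages else None
                request = gen.send(msg)    # resume with the response to the pull
            else:                          # ('push', value)
                results.append(request[1])
                request = gen.send(None)
    except StopIteration:
        pass
    return results

print(toy_driver(toy_node, ['hello']))     # prints ['HELLO']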
Deserialize raw message¶
# Deserialize raw ros message
pytype = get_message_type(cam)
rosmsg = pytype()
rosmsg.deserialize(msg.data)
The raw_messages node pushes raw ROS messages, which have to be deserialized using the correct message type returned by get_message_type.
Write image to file¶
# Write image to jpeg and push it to output stream
name = '{}.jpg'.format(cam.topic.replace('/', ':')[1:])
imgfile = yield marv.make_file(name)
img = imgmsg_to_cv2(rosmsg, "rgb8")
cv2.imwrite(imgfile.path, img, (cv2.IMWRITE_JPEG_QUALITY, 60))
yield marv.push(imgfile)
Define a name for the image file and instruct marv to create a file in its store. Then transform the ROS image message into an OpenCV image and save it to the file. Finally, push the file to the output stream for consumers of our image node to pull.
Next, we’ll create a detail section that pulls and displays this image.
Show image in detail section¶
In order to show an image in a detail section, the section needs to be coded and added to the configuration along with the image node created in the previous section.
Code¶
@marv.node(Section)
@marv.input('title', default='Image')
@marv.input('image', default=image)
def image_section(image, title):
    """Create detail section with one image.

    Args:
        title (str): Title to be displayed for detail section.
        image: marv image file.

    Returns:
        One detail section.
    """
    # pull first image
    img = yield marv.pull(image)
    if img is None:
        return

    # create image widget and section containing it
    widget = {'title': image.title, 'image': {'src': img.relpath}}
    section = {'title': title, 'widgets': [widget]}
    yield marv.push(section)
The image_section node’s output stream contains messages of type Section. It consumes an input parameter title with a default value of Image, as well as the output stream of the image node declared previously. In case the image node did not push any message to its output stream, we simply return without creating a section.
Otherwise, a widget of type image is created and finally a section containing this image widget is pushed to the output stream.
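For reference, the section pushed by image_section for the topic used in this tutorial would look roughly like the following; the src value is illustrative, as the actual relpath is determined by the store:

{
    'title': 'Image',
    'widgets': [
        {'title': '/wide_stereo/left/image_rect_throttle',
         'image': {'src': 'wide_stereo:left:image_rect_throttle.jpg'}},
    ],
}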
Next, we are adding our nodes to the configuration.
Config¶
marv.conf
[marv]
collections = bags

[collection bags]
scanner = marv_robotics.bag:scan
scanroots =
    ../scanroot
nodes =
    marv_nodes:dataset
    marv_robotics.bag:bagmeta
    marv_robotics.detail:bagmeta_table
    marv_robotics.detail:connections_section
    marv_tutorial:image
    marv_tutorial:image_section
detail_summary_widgets =
    bagmeta_table
detail_sections =
    connections_section
    image_section
Note
Remember to stop uwsgi, run marv init, and start uwsgi again.
Run nodes¶
(venv:~/site) $ marv run --collection=bags
INFO marv.run qmflhjcp6j.image_section.io4thnkdxx.default (image_section) started
INFO marv.run qmflhjcp6j.image.og54how3rb.default (image) started
INFO marv.run qmflhjcp6j.image.og54how3rb.default finished
INFO marv.run qmflhjcp6j.image_section.io4thnkdxx.default finished
INFO marv.run vmgpndaq6f.image_section.io4thnkdxx.default (image_section) started
INFO marv.run vmgpndaq6f.image.og54how3rb.default (image) started
INFO marv.run vmgpndaq6f.image.og54how3rb.default finished
INFO marv.run vmgpndaq6f.image_section.io4thnkdxx.default finished
Et voilà. Reload your browser (http://localhost:8000) and you should see the detail section with an image. Let’s extract multiple images!
Display gallery of images¶
To display a gallery of images, we’ll write another two nodes, again one for extraction and one for the section, and add them to the configuration.
Code¶
@marv.node(File)
@marv.input('cam', marv.select(raw_messages, TOPIC))
def images(cam):
    """Extract images from input stream to jpg files.

    Args:
        cam: Input stream of raw rosbag messages.

    Returns:
        File instances for images of input stream.
    """
    # Set output stream title
    yield marv.set_header(title=cam.topic)

    # Fetch and process first 20 image messages
    name_template = '%s-{}.jpg' % cam.topic.replace('/', ':')[1:]
    while True:
        idx, msg = yield marv.pull(cam, enumerate=True)
        if msg is None or idx >= 20:
            break

        # Deserialize raw ros message
        pytype = get_message_type(cam)
        rosmsg = pytype()
        rosmsg.deserialize(msg.data)

        # Write image to jpeg and push it to output stream
        img = imgmsg_to_cv2(rosmsg, "rgb8")
        name = name_template.format(idx)
        imgfile = yield marv.make_file(name)
        cv2.imwrite(imgfile.path, img)
        yield marv.push(imgfile)
Instead of only the first image, we now want to extract the first 20 images. We use a while loop and break as soon as either the input stream is exhausted or 20 images have been extracted – the enumerate keyword of marv.pull saves us from counting manually. A name_template together with the index produces unique and meaningful filenames to store the images, for example:
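This is a plain Python check you can run in an interpreter, reusing the TOPIC constant defined at the top of the module:

name_template = '%s-{}.jpg' % TOPIC.replace('/', ':')[1:]
print(name_template.format(0))    # wide_stereo:left:image_rect_throttle-0.jpg
print(name_template.format(19))   # wide_stereo:left:image_rect_throttle-19.jpg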
All other elements you already know from the image node above. Let’s create a section to display the images.
@marv.node(Section)
@marv.input('title', default='Gallery')
@marv.input('images', default=images)
def gallery_section(images, title):
    """Create detail section with gallery.

    Args:
        title (str): Title to be displayed for detail section.
        images: stream of marv image files

    Returns:
        One detail section.
    """
    # pull all images
    imgs = []
    while True:
        img = yield marv.pull(images)
        if img is None:
            break
        imgs.append({'src': img.relpath})
    if not imgs:
        return

    # create gallery widget and section containing it
    widget = {'title': images.title, 'gallery': {'images': imgs}}
    section = {'title': title, 'widgets': [widget]}
    yield marv.push(section)
The gallery_section depends on the just created images node. To pull all images it also uses a while loop, and while in the image_section we pushed an image widget, this time we use a gallery widget with a list of images. Let’s add the new nodes to the config file and run them. Marv determines which nodes’ output is missing from the store and runs only these. By default it checks all nodes listed in detail_summary_widgets and detail_sections. Actually, there are two more config keys, but they will be part of a future tutorial. Dependencies are automatically added as needed.
Config¶
[marv]
collections = bags

[collection bags]
scanner = marv_robotics.bag:scan
scanroots =
    ../scanroot
nodes =
    marv_nodes:dataset
    marv_robotics.bag:bagmeta
    marv_robotics.detail:bagmeta_table
    marv_robotics.detail:connections_section
    marv_tutorial:image
    marv_tutorial:image_section
    marv_tutorial:images
    marv_tutorial:gallery_section
detail_summary_widgets =
    bagmeta_table
detail_sections =
    connections_section
    image_section
    gallery_section
Note
Remember to stop uwsgi, run marv init, and start uwsgi again.
(venv:~/site) $ marv run --collection=bags
INFO marv.run qmflhjcp6j.gallery_section.oamfub7jpa.default (gallery_section) started
INFO marv.run qmflhjcp6j.images.og54how3rb.default (images) started
INFO marv.run qmflhjcp6j.images.og54how3rb.default finished
INFO marv.run qmflhjcp6j.gallery_section.oamfub7jpa.default finished
INFO marv.run vmgpndaq6f.gallery_section.oamfub7jpa.default (gallery_section) started
INFO marv.run vmgpndaq6f.images.og54how3rb.default (images) started
INFO marv.run vmgpndaq6f.images.og54how3rb.default finished
INFO marv.run vmgpndaq6f.gallery_section.oamfub7jpa.default finished
Et voilà. Reload your browser (http://localhost:8000) and you should see the gallery section.
Let’s move to the final piece of this tutorial: a section combining multiple widgets and introducing two more widget types: tables and plots.
Combined: table, plot and gallery¶
In the final section we want to display a table listing the name and size of the image files, a plot of the filesizes, and again the gallery. To this end we create a stream of filesizes, the plot, and the combined section:
@marv.node()
@marv.input('images', default=images)
def filesizes(images):
    """Stat filesize of files.

    Args:
        images: stream of marv image files

    Returns:
        Stream of filesizes
    """
    # Pull each image and push its filesize
    while True:
        img = yield marv.pull(images)
        if img is None:
            break
        yield marv.push(img.size)
“Computing” the filesizes is so cheap that we do not want to store the node’s output; therefore we don’t need to specify a schema and are able to push arbitrary python objects (more on this later).
@marv.node(Widget)
@marv.input('filesizes', default=filesizes)
def filesize_plot(filesizes):
    # Pull all filesizes
    sizes = []
    while True:
        size = yield marv.pull(filesizes)
        if size is None:
            break
        sizes.append(size)

    # plot
    fig = plt.figure()
    axis = fig.add_subplot(1, 1, 1)
    axis.plot(sizes, 'bo')

    # EE: save figure to file
    plotfile = yield marv.make_file('filesizes.json')
    with open(plotfile.path, 'w') as f:
        json.dump(mpld3.fig_to_dict(fig), f)

    # EE: create plot widget referencing file
    widget = {
        'title': 'Filesizes',
        'mpld3': 'marv-partial:{}'.format(plotfile.relpath),
    }

    # Alternative code for community edition
    #plotfile = yield marv.make_file('filesizes.jpg')
    #fig.savefig(plotfile.path)
    #widget = {
    #    'title': 'Filesizes',
    #    'image': {'src': plotfile.relpath},
    #}

    yield marv.push(widget)
We use matplotlib to create plots and mpld3 for serialization to a file and visualization in the browser (EE-only; for CE use the image widget, see the commented alternative above). The file is referenced by the mpld3 widget using marv-partial. This reduces the size of the detail view, as the plot data is only loaded once the contents of the section referencing it are displayed. An alternative would be to embed the plot data directly into the widget. Referencing via marv-partial is currently the only mode of operation for the mpld3 widget, and it is the only widget supporting this feature so far.
@marv.node(Section)
@marv.input('title', default='Combined')
@marv.input('images', default=images)
@marv.input('filesizes', default=filesizes)
@marv.input('filesize_plot', default=filesize_plot)
def combined_section(title, images, filesizes, filesize_plot):
    # A gallery of images
    imgs = []
    gallery = {'title': images.title, 'gallery': {'images': imgs}}

    # A table with two columns
    rows = []
    columns = [{'title': 'Name', 'formatter': 'rellink'},
               {'title': 'Size', 'formatter': 'filesize'}]
    table = {'table': {'columns': columns, 'rows': rows}}

    # pull images and filesizes synchronously
    while True:
        img, filesize = yield marv.pull_all(images, filesizes)
        if img is None:
            break
        imgs.append({'src': img.relpath})
        rows.append({'cells': [
            {'link': {'href': img.relpath,
                      'title': os.path.basename(img.relpath)}},
            {'uint64': filesize},
        ]})

    # pull filesize_plot AFTER individual messages
    plot = yield marv.pull(filesize_plot)

    # section containing multiple widgets
    section = {'title': title, 'widgets': [table, plot, gallery]}
    yield marv.push(section)
New here is the table widget, defined by rows and columns. Each column has a title to be displayed for the column, as well as a formatter responsible for rendering the content of each cell of that column. A row has cells, a list of values formatted by the corresponding formatters.
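Put together, the first row of the table built above would look roughly like the following; the href and size values are illustrative, as the actual relpath is determined by the store:

rows[0] == {'cells': [
    {'link': {'href': 'wide_stereo:left:image_rect_throttle-0.jpg',   # 'rellink' formatter
              'title': 'wide_stereo:left:image_rect_throttle-0.jpg'}},
    {'uint64': 152074},                                               # 'filesize' formatter
]}

Next, add the combined_section node to the configuration: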
[marv]
collections = bags

[collection bags]
scanner = marv_robotics.bag:scan
scanroots =
    ../scanroot
nodes =
    marv_nodes:dataset
    marv_robotics.bag:bagmeta
    marv_robotics.detail:bagmeta_table
    marv_robotics.detail:connections_section
    marv_tutorial:image
    marv_tutorial:image_section
    marv_tutorial:images
    marv_tutorial:gallery_section
    marv_tutorial:combined_section
detail_summary_widgets =
    bagmeta_table
detail_sections =
    connections_section
    image_section
    gallery_section
    combined_section
Note
Remember to stop uwsgi, run marv init, and start uwsgi again.
$ marv run --collection=bags
INFO marv.run qmflhjcp6j.combined_section.ft6zlxpbvn.default (combined_section) started
ERRO marv.cli Exception occured for SetID('qmflhjcp6j3hsq7e56xzktf3yq'):
Traceback (most recent call last):
...
marv_node.driver.MakeFileNotSupported: <VolatileStream qmflhjcp6j.filesize_plot.cpenbxihfq.default>
There was an error: the filesize_plot node requested to make a file, but marv refused to follow through. Nodes can be persistent or volatile. Persistent nodes are stored in marv’s store, need to declare a schema (e.g. Section or Widget), and have to be listed in marv.conf. Volatile nodes need none of that and are run every time somebody needs them. The filesizes node for example is cheap to run, arguably pointless beyond the scope of a tutorial, and therefore volatile: not listed in marv.conf and declaring no schema, just @marv.node().
For nodes to be able to make files, they need to be persistent. We forgot to add filesize_plot to marv.conf:
[marv]
collections = bags

[collection bags]
scanner = marv_robotics.bag:scan
scanroots =
    ../scanroot
nodes =
    marv_nodes:dataset
    marv_robotics.bag:bagmeta
    marv_robotics.detail:bagmeta_table
    marv_robotics.detail:connections_section
    marv_tutorial:image
    marv_tutorial:image_section
    marv_tutorial:images
    marv_tutorial:gallery_section
    marv_tutorial:filesize_plot
    marv_tutorial:combined_section
detail_summary_widgets =
    bagmeta_table
detail_sections =
    connections_section
    image_section
    gallery_section
    combined_section
Note
Remember to stop uwsgi, run marv init, and start uwsgi again.
$ marv run --collection=bags
INFO marv.run qmflhjcp6j.combined_section.ft6zlxpbvn.default (combined_section) started
INFO marv.run qmflhjcp6j.filesize_plot.cpenbxihfq.default (filesize_plot) started
INFO marv.run qmflhjcp6j.filesize_plot.cpenbxihfq.default finished
INFO marv.run qmflhjcp6j.combined_section.ft6zlxpbvn.default finished
INFO marv.run vmgpndaq6f.bagmeta_table.gahvdc4vpg.default (bagmeta_table) started
INFO marv.run vmgpndaq6f.combined_section.ft6zlxpbvn.default (combined_section) started
INFO marv.run vmgpndaq6f.gallery_section.oamfub7jpa.default (gallery_section) started
INFO marv.run vmgpndaq6f.image_section.io4thnkdxx.default (image_section) started
INFO marv.run vmgpndaq6f.connections_section.yjrewalqzc.default (connections_section) started
INFO marv.run vmgpndaq6f.bagmeta.dwz4xbykdt.default (bagmeta) started
INFO marv.run vmgpndaq6f.filesize_plot.cpenbxihfq.default (filesize_plot) started
INFO marv.run vmgpndaq6f.images.og54how3rb.default (images) started
INFO marv.run vmgpndaq6f.image.og54how3rb.default (image) started
INFO marv.run vmgpndaq6f.bagmeta.dwz4xbykdt.default finished
INFO marv.run vmgpndaq6f.connections_section.yjrewalqzc.default finished
INFO marv.run vmgpndaq6f.bagmeta_table.gahvdc4vpg.default finished
INFO marv.run vmgpndaq6f.image.og54how3rb.default finished
INFO marv.run vmgpndaq6f.image_section.io4thnkdxx.default finished
INFO marv.run vmgpndaq6f.images.og54how3rb.default finished
INFO marv.run vmgpndaq6f.gallery_section.oamfub7jpa.default finished
INFO marv.run vmgpndaq6f.filesize_plot.cpenbxihfq.default finished
INFO marv.run vmgpndaq6f.combined_section.ft6zlxpbvn.default finished
Persistent nodes and custom output types¶
If nodes do not declare an output message type (@marv.node()), they are volatile: they run each time somebody needs them and can output arbitrary python objects. In order to use node output in listing_columns or filters, the node needs to be persistent. In order to persist a node in the store, it needs to declare an output type (@marv.node(TYPE)) and be listed in nodes. MARV uses capnp to serialize and persist messages and ships with a couple of pre-defined types, which are available via marv.types. Please take a look at that module and the capnp files it is importing from.
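As a condensed recap of the two kinds of declarations used in this tutorial – a sketch only, with input declarations and bodies omitted, assuming Widget is among the types re-exported by marv.types as described above:

import marv
from marv.types import Widget

@marv.node()                      # volatile: output not stored, node re-run on
def filesizes(images):            # demand, arbitrary python objects may be pushed
    pass

@marv.node(Widget)                # persistent: output stored, schema declared,
def filesize_plot(filesizes):     # and listed in the nodes key of marv.conf
    pass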
In order to create your own capnp message types, place a module.capnp next to your module.py and take a look at the capnp files shipping with marv as well as the capnp schema language.
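Using such a custom type then follows the same pattern as the shipped types. The following is only a sketch: the ImageInfo struct, its name and size fields, and the types_capnp module name are hypothetical and depend on your own schema and on how your package loads it; as with Section and Widget above, plain dicts matching the schema fields are pushed:

from marv_tutorial.types_capnp import ImageInfo   # hypothetical generated module

@marv.node(ImageInfo)
@marv.input('images', default=images)
def image_infos(images):
    """Push one ImageInfo message per image (illustrative only)."""
    while True:
        img = yield marv.pull(images)
        if img is None:
            break
        yield marv.push({'name': os.path.basename(img.relpath),
                         'size': img.size})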
Summary¶
You learned to create a python package and wrote your first nodes to extract images, create a plot and table, and display these in detail sections.
Happy coding!