Canonical joins Linaro and co-founds LITE project

Canonical joins Linaro as one of the founding members of the LITE project, fostering collaboration and interoperability in the IoT and embedded space. “Linaro, the collaborative engineering organization developing open source software for the ARM® architecture, today announced the launch of the Linaro IoT and Embedded (LITE) Segment Group. Working in collaboration with industry leaders, […]

Rocket.chat, a new snap is landing on your Nextcloud box and beyond!

Last week Nextcloud, Western Digital and Canonical launched the Nextcloud Box, a simple box powered by a Raspberry Pi to easily build a private cloud storage service at home. With the Nextcloud Box, you are already securely sharing files with your colleagues, friends and family. The team at Rocket.chat has been asking “Why not […]

Monitoring “big software” stacks with the Elastic Stack

Big Software is a new class of application. It’s composed of so many moving pieces that humans, by themselves, cannot design, deploy or operate them. OpenStack, Hadoop and container-based architectures are all examples of Big Software. Gathering service metrics for complex big software stacks can be a chore, especially when you need to warehouse, visualize, […]

Leostream Joins Canonical’s Charm partner programme

Leostream Corporation, a leading developer of hosted desktop connection management software, has joined the Charm partner programme to facilitate the deployment of virtual desktops on Ubuntu OpenStack. The programme helps solution providers make the best use of Canonical’s model-driven operations system, Juju, enabling instant workload deployment, integration and scaling on any public or private cloud, as […]

Low Graphics Mode in Unity 7

Unity 7 has had a low graphics mode for a long time, but recently we’ve been making it better. We’ve reduced the number of visual effects shown while running in low graphics mode. At a high level this includes things like reducing the amount of animation in elements such as the window switcher, launcher and menus […]

Ubuntu OpenStack is available today on all IBM Servers

IBM and Canonical expand their hybrid cloud alliance as Ubuntu OpenStack becomes available today on all IBM servers. Ubuntu OpenStack support is included with Canonical’s Ubuntu Advantage enterprise-grade support offering, and Ubuntu OpenStack is the only OpenStack offered by a commercial Linux distributor that supports IBM Power Systems today. LONDON, U.K., September 19, 2016: Canonical announces today that […]

Nextcloud Box – a private cloud and IoT solution for home users

Companies launch Raspberry Pi-based device to give consumers greater control over their data. Stuttgart, Germany – September 16, 2016 – Nextcloud, Canonical and WDLabs are today launching the Nextcloud Box, a secure, private, self-hosted cloud and Internet of Things (IoT) platform giving consumers a way to take back control over their personal data. Nextcloud Box […]

The Future Of Your Smart Home

You can’t hide from the Internet of Things [IoT] any more. IoT is mentioned in the press almost weekly now, if not daily. Even your grandmother is talking about IoT. Smart thermostats, devices that listen to and understand you, buttons you push to get products delivered, smart televisions, fridges, washing machines, vacuum cleaners, drones, robots […]

Why open source matters to the IoT market

With everything becoming connected through IoT, security will be key to its success. And the best way to secure a system is to allow anybody to inspect the code and contribute a patch. The Internet of Things (IoT) has the potential to become a $4-11 trillion market by 2025, contributing 11% to the global economy, […]

Big Data Gets Super-Fast with Ubuntu on Bigstep Metal Cloud

Bigstep gains Ubuntu Public Cloud certification. Ubuntu Server images are optimized for performance, security and dependability on Bigstep Metal Cloud, with enterprise security updates. Ubuntu Advantage support is available through the Bigstep console. CHICAGO, U.S. and LONDON, U.K., Sept 14 – Bigstep, the big data cloud provider, and Canonical, the company behind Ubuntu, the leading platform and […]

Getting started: Creating Ubuntu Apps with Cordova

A few weeks ago we participated in PhoneGap Day EU. It was a great opportunity to meet the Cordova development team, and a range of app developers gathered for the occasion. The latest Ubuntu 16.04 LTS release was demoed, running on a brand new BQ M10 tablet in convergence mode. Creating responsive […]

Digital Signage meets IoT Series continues!

Last month we kicked off the first session in our Digital Signage meets IoT series, focusing on building success with a Raspberry Pi. It was a great session from Screenly founder Viktor Petersson and Sixteen:Nine editor Dave Haynes, who touched on industry trends, the disruptive effect on hardware costs and the adoption of agile software. If you missed the […]

e-shelter and Canonical launch Joint Managed OpenStack Private Cloud

A quick, easy and cost-predictable way to get a fully functional OpenStack cloud. Built on Ubuntu OpenStack, using the application management tools Juju and MAAS. The fully managed cloud service takes away the hassle of day-to-day IT operations. Frankfurt, Germany and London, U.K., 5th July, 2016 – e-shelter, a leading data center operator in Europe, and […]

Certified Ubuntu Images Available in SoftLayer

We are excited to announce today that SoftLayer, an IBM Company and a world-leading IaaS provider, is now an Ubuntu Certified Public Cloud partner for Ubuntu guest images. For users, this means you can now harness the value of the best Ubuntu user experience, optimized for SoftLayer bare metal and virtual servers: Ubuntu cloud guest […]

HOWTO: Host your own SNAP store!

SNAPs are the cross-distro, cross-cloud, cross-device Linux packaging format of the future.  And we’re already hosting a fantastic catalog of SNAPs in the SNAP store provided by Canonical.  Developers are welcome to publish their software for…

HOWTO: Classic, apt-based Ubuntu 16.04 LTS Server on the rpi2!

Classic Ubuntu 16.04 LTS, on an rpi2. Hopefully by now you’re well aware of Ubuntu Core — the snappiest way to run Ubuntu on a Raspberry Pi. But have you ever wanted to run classic (apt/deb) Ubuntu Server on a Raspberry Pi 2? Well, you’re in luck! […]

10 cool accessories for the Aquaris M10 Ubuntu Edition tablet

In the office we’ve been using the Aquaris M10 Ubuntu Edition tablet and have a few accessories to recommend with it! Check out the list below. Bluetooth Keyboard Bluetooth mouse Or even this bluetooth mouse! 4-Port Micro USB to USB Hub Micro-HDMI to HDMI Cable NexDock – world’s most affordable laptop that converges with the […]

LXD 2.0: Remote hosts and container migration [6/12]

This is the sixth blog post in this series about LXD 2.0.

LXD logo

Remote protocols

LXD 2.0 supports two protocols:

  • LXD 1.0 API: That’s the REST API used between the clients and a LXD daemon as well as between LXD daemons when copying/moving images and containers.
  • Simplestreams: The Simplestreams protocol is a read-only, image-only protocol used by both the LXD client and daemon to get image information and import images from some public image servers (like the Ubuntu images).
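Conceptually, a simplestreams-style client just scans a read-only JSON index for the newest matching image. The sketch below is a deliberately simplified illustration of that idea; the real format served by cloud-images.ubuntu.com is considerably richer, and the index structure and field names here are invented for the example.

```python
# Hypothetical, minimal model of a simplestreams-style lookup:
# a read-only index maps products to dated versions, and the client
# selects the newest version of the product it wants.

index = {
    "com.ubuntu.cloud:server:14.04:amd64": {
        "versions": {
            "20160509": {"items": {"lxd.tar.xz": {"path": "images/a.tar.xz"}}},
            "20160516": {"items": {"lxd.tar.xz": {"path": "images/b.tar.xz"}}},
        }
    }
}

def latest_image(index, product, item="lxd.tar.xz"):
    """Return the path of the newest version of `product` in the index."""
    versions = index[product]["versions"]
    newest = max(versions)  # YYYYMMDD keys sort chronologically as strings
    return versions[newest]["items"][item]["path"]

print(latest_image(index, "com.ubuntu.cloud:server:14.04:amd64"))
# images/b.tar.xz
```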

Everything below will be using the first of those two.

Security

Authentication for the LXD API is done through client certificate authentication over TLS 1.2 using recent ciphers. When two LXD daemons must exchange information directly, a temporary token is generated by the source daemon and transferred through the client to the target daemon. This token may only be used to access a particular stream and is immediately revoked once used, so it cannot be re-used.

To avoid Man In The Middle attacks, the client tool also sends the certificate of the source server to the target. That means that for a particular download operation, the target server is provided with the source server URL, a one-time access token for the resource it needs and the certificate that the server is supposed to be using. This prevents MITM attacks and gives only temporary access to the object of the transfer.
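The one-time token scheme described above can be modelled in a few lines. This is an illustrative sketch, not the real LXD implementation or API; it only shows why a resource-bound token that is revoked on first use defeats replay.

```python
# Toy model of a one-time, resource-bound access token (not LXD code):
# the source issues a token for one resource, and revokes it on use,
# so a replayed request with the same token fails.

import secrets

class SourceDaemon:
    def __init__(self):
        self._tokens = {}  # token -> resource it grants access to

    def issue_token(self, resource):
        token = secrets.token_hex(16)
        self._tokens[token] = resource
        return token

    def fetch(self, token, resource):
        # The token must exist and match the resource; it is revoked
        # immediately, whether or not the transfer then succeeds.
        if self._tokens.pop(token, None) != resource:
            raise PermissionError("invalid or already-used token")
        return f"data for {resource}"

src = SourceDaemon()
t = src.issue_token("containers/c1")
print(src.fetch(t, "containers/c1"))   # first use works
try:
    src.fetch(t, "containers/c1")      # replay is rejected
except PermissionError as e:
    print(e)
```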

Network requirements

LXD 2.0 uses a model where the target of an operation (the receiving end) is connecting directly to the source to fetch the data.

This means that you must ensure that the target server can connect to the source directly, updating any firewall rules along the way as needed.
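If a copy or move stalls, a quick probe run from the target host can confirm whether the source daemon's port is reachable at all. A minimal sketch follows; the address 1.2.3.4 and the port 8443 (LXD's usual HTTPS port) are placeholders for your own source server.

```python
# Quick TCP reachability probe, run from the target host, to check that
# the source LXD daemon's port is not blocked by a firewall.

import socket

def can_reach(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if can_reach("1.2.3.4", 8443):
    print("source daemon reachable")
else:
    print("blocked: check firewalls between target and source")
```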

We plan to allow this direction to be reversed, and also to allow proxying through the client itself, for those rare cases where draconian firewalls prevent any communication between the two hosts.

Interacting with remote hosts

Rather than requiring users to provide a hostname or IP address, and then validate certificate information, every time they want to interact with a remote host, LXD uses the concept of “remotes”.

By default, the only real LXD remote configured is “local:” which also happens to be the default remote (so you don’t have to type its name). The local remote uses the LXD REST API to talk to the local daemon over a unix socket.

Adding a remote

Say you have two machines with LXD installed, your local machine and a remote host that we’ll call “foo”.

First you need to make sure that “foo” is listening on the network and has a password set, so get a remote shell on it and run:

lxc config set core.https_address [::]:8443
lxc config set core.trust_password something-secure

Now on your local LXD, we just need to make it visible to the network so we can transfer containers and images from it:

lxc config set core.https_address [::]:8443

Now that the daemon configuration is done on both ends, you can add “foo” to your local client with:

lxc remote add foo 1.2.3.4

(replacing 1.2.3.4 by your IP address or FQDN)

You’ll see something like this:

stgraber@dakara:~$ lxc remote add foo 2607:f2c0:f00f:2770:216:3eff:fee1:bd67
Certificate fingerprint: fdb06d909b77a5311d7437cabb6c203374462b907f3923cefc91dd5fce8d7b60
ok (y/n)? y
Admin password for foo:
Client certificate stored at server: foo

You can then list your remotes and you’ll see “foo” listed there:

stgraber@dakara:~$ lxc remote list
+-----------------+-------------------------------------------------------+---------------+--------+--------+
|      NAME       |                          URL                          |   PROTOCOL    | PUBLIC | STATIC |
+-----------------+-------------------------------------------------------+---------------+--------+--------+
| foo             | https://[2607:f2c0:f00f:2770:216:3eff:fee1:bd67]:8443 | lxd           | NO     | NO     |
+-----------------+-------------------------------------------------------+---------------+--------+--------+
| images          | https://images.linuxcontainers.org:8443               | lxd           | YES    | NO     |
+-----------------+-------------------------------------------------------+---------------+--------+--------+
| local (default) | unix://                                               | lxd           | NO     | YES    |
+-----------------+-------------------------------------------------------+---------------+--------+--------+
| ubuntu          | https://cloud-images.ubuntu.com/releases              | simplestreams | YES    | YES    |
+-----------------+-------------------------------------------------------+---------------+--------+--------+
| ubuntu-daily    | https://cloud-images.ubuntu.com/daily                 | simplestreams | YES    | YES    |
+-----------------+-------------------------------------------------------+---------------+--------+--------+

Interacting with it

Ok, so we have a remote server defined, what can we do with it now?

Well, just about everything you’ve seen in the posts so far; the only difference is that you must tell LXD which host to run against.

For example:

lxc launch ubuntu:14.04 c1

Will run on the default remote (“lxc remote get-default”) which is your local host.

lxc launch ubuntu:14.04 foo:c1

Will instead run on foo.

Listing running containers on a remote host can be done with:

stgraber@dakara:~$ lxc list foo:
+------+---------+---------------------+-----------------------------------------------+------------+-----------+
| NAME |  STATE  |        IPV4         |                     IPV6                      |    TYPE    | SNAPSHOTS |
+------+---------+---------------------+-----------------------------------------------+------------+-----------+
| c1   | RUNNING | 10.245.81.95 (eth0) | 2607:f2c0:f00f:2770:216:3eff:fe43:7994 (eth0) | PERSISTENT | 0         |
+------+---------+---------------------+-----------------------------------------------+------------+-----------+

One thing to keep in mind is that you have to specify the remote host for both images and containers. So if you have a local image called “my-image” on “foo” and want to create a container called “c2” from it, you have to run:

lxc launch foo:my-image foo:c2

Finally, getting a shell into a remote container works just as you would expect:

lxc exec foo:c1 bash

Copying containers

Copying containers between hosts is as easy as it sounds:

lxc copy foo:c1 c2

And you’ll have a new local container called “c2” created from a copy of the remote “c1” container. This requires “c1” to be stopped first, but you could just copy a snapshot instead and do it while the source container is running:

lxc snapshot foo:c1 current
lxc copy foo:c1/current c3

Moving containers

Unless you’re doing live migration (which will be covered in a later post), you have to stop the source container prior to moving it, after which everything works as you’d expect.

lxc stop foo:c1
lxc move foo:c1 local:

This example is functionally identical to:

lxc stop foo:c1
lxc move foo:c1 c1

How this all works

Interactions with remote containers work as you would expect: rather than using the REST API over a local Unix socket, LXD simply uses the exact same API over a remote HTTPS transport.

Where it gets a bit trickier is when interaction between two daemons must occur, as is the case for copy and move.

In those cases the following happens:

  1. The user runs “lxc move foo:c1 c1”.
  2. The client contacts the local: remote to check for an existing “c1” container.
  3. The client fetches container information from “foo”.
  4. The client requests a migration token from the source “foo” daemon.
  5. The client sends that migration token as well as the source URL and “foo”’s certificate to the local LXD daemon.
  6. The local LXD daemon then connects directly to “foo” using the provided token
    1. It connects to a first control websocket
    2. It negotiates the filesystem transfer protocol (zfs send/receive, btrfs send/receive or plain rsync)
    3. If available locally, it unpacks the image which was used to create the source container. This is to avoid needless data transfer.
    4. It then transfers the container and any of its snapshots as a delta.
  7. If successful, the client then instructs “foo” to delete the source container.
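The sequence above can be condensed into a toy simulation. None of this is real LXD code; the class names, token format and protocol lists are invented for illustration. It just shows how the client brokers a one-time token while the target pulls data directly from the source, with the two ends agreeing on a transfer protocol.

```python
# Toy simulation of the "lxc move foo:c1 c1" flow described above
# (hypothetical names throughout; not the LXD client or API).

class Daemon:
    def __init__(self, name, containers=None, protocols=("zfs", "rsync")):
        self.name = name
        self.containers = dict(containers or {})
        self.protocols = protocols  # filesystem transfer methods supported
        self._tokens = set()

    def issue_migration_token(self, container):
        # Step 4: the source daemon hands the client a one-time token.
        token = f"token-for-{container}"
        self._tokens.add(token)
        return token

    def serve(self, token, container, peer_protocols):
        # Step 6: the source validates (and revokes) the token, then both
        # ends negotiate a filesystem transfer protocol.
        self._tokens.remove(token)
        proto = next(p for p in self.protocols if p in peer_protocols)
        return proto, self.containers[container]

def move(src, dst, container):
    assert container not in dst.containers          # step 2: no name clash
    token = src.issue_migration_token(container)    # steps 3-4
    proto, data = src.serve(token, container, dst.protocols)  # steps 5-6
    dst.containers[container] = data                # transfer lands on target
    del src.containers[container]                   # step 7: source deleted
    return proto

foo = Daemon("foo", {"c1": "rootfs"}, protocols=("btrfs", "rsync"))
local = Daemon("local", protocols=("zfs", "rsync"))
print(move(foo, local, "c1"))  # negotiated protocol: rsync
```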

Try all this online

Don’t have two machines to try remote interactions and moving/copying containers?

That’s okay, you can test it all online using our demo service.
The included step-by-step walkthrough even covers it!

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net

LXD networking: lxdbr0 explained

Recently, LXD stopped depending on lxc, and thus moved to using its own bridge, called lxdbr0. lxdbr0 behaves significantly differently from lxcbr0: it is IPv6 link-local only by default (i.e. there is no IPv4 or IPv6 subnet configured by default), and only HTTP traffic is proxied over the network. This means that e.g. you […]

How many people use Ubuntu?

Discover the range of industries, people and services that are using Ubuntu right now. Netflix. Snapchat. Dropbox. Uber. Tesla… and the International Space Station – what do they all have in common? They run on Ubuntu. To celebrate our upcoming 16.04 LTS release we wanted to shine a bit of light on how many people in the […]