The Wine development release 1.7.48 is now available.
What’s new in this release:
On July 27, Canonical, the company behind the world’s most popular free operating system, Ubuntu Linux, announced on one of its Twitter accounts that it has launched a new campaign targeted at movie directors.
Entitled “Scopes, enter your content Wonderland,” the new Ubuntu Phone campaign calls on all filmmakers to create video ads for the company’s Ubuntu Touch mobile operating system, especially for Scopes.
“Create a video story that expresses the emotion, wond… (read more)
Robert Lange has announced the release of VectorLinux 7.1 “Light” edition, a Slackware-based distribution featuring the lightweight IceWM window manager: “Vector Linux Light 7.1 is released and available for download. The ‘lightness’ of this edition is relative to our Standard edition, and is achieved by using the lightweight….
On July 27, Canonical’s Łukasz Zemczak sent in his daily report informing us about the work done by the Ubuntu Touch developers over the last couple of days, and apologizing for a regression introduced by the Ubuntu Touch OTA-5 update.
On July 27, ARNU Box (formerly Armada) had the great pleasure of informing Softpedia about the immediate availability for purchase of two new Pure Linux set-top box devices powered by the recently released Kodi 15.0 “Isengard” media center software (formerly XBMC Media Center).
Two new set-top boxes powered by the… (read more)
On July 27, the Google Chrome developers, through Alex Mineer, were excited to announce the promotion of the Google Chrome 45 web browser to the Beta channel for all supported computer operating systems, including Linux, Mac OS X, and Microsoft Windows.
Caolán McNamara, a renowned Red Hat engineer, has recently published some interesting details, claiming that he managed to get the well-known LibreOffice open-source office suite to run on the next-generation Wayland display server.
Electric Sheep Fencing LLC, through Chris Buechler, has announced the immediate availability for download of the fourth maintenance release of the pfSense 2.2 FreeBSD-based firewall software.
According to the release notes, pfSense 2.2.4 is an important release that patches multiple stored XSS vulnerabilities in the software’s web-based interface, fixes various issues with the tcp package, most… (read more)
The developers of the famous open-source Docker Linux container engine have recently announced that the first RC (Release Candidate) version of the anticipated Docker 1.8 app is now available for download and testing.
According to the comprehensive changelog, attached at the end of the article for reference, Docker 1.8.0 RC1 brings with it a vast array of new features and fixes many of the annoying bugs re… (read more)
On July 27, the developers of the famous Git open-source version control system were more than proud to announce the immediate availability for download of version 2.5.0 of Git.
According to the release notes, Git 2.5.0 is a major feature release that includes countless improvements to various areas of the software but also fixes some of the most annoying bugs reported by users since the previ… (read more)
I wrote an interesting editorial a while ago related to the Mr. Robot TV show that runs on the USA Network channel every Wednesday, starring Rami Malek as a computer hacker who goes by the name of Elliot.
At that point in time, I was very impressed with the level of information offered on the show about various open-s… (read more)
On July 27, Arne Exton, the creator of several Linux kernel-based operating systems, had the pleasure of informing us about the immediate availability for download of a new build of his Exton|OS Light distribution.
Being based on Canonical’s Ubuntu 15.04 (Vivid Vervet) operating system, Exton|OS Light Bui… (read more)
Last night the 2015 recipients were announced:
BoFs, meetings and workshops
Following the weekend of talks inspiring and informing the community, the focus shifted today to working on details.
We had some issues with sound at today’s wrap-up session location, so the wrap-up will be moving to a better place tomorrow.
About Akademy 2015, A Coruña, Spain
For most of the year, KDE—one of the largest free and open software communities in the world—works online by email, IRC, forums and mailing lists. Akademy provides all KDE contributors the opportunity to meet in person to foster social bonds, work on concrete technology issues, consider new ideas, and reinforce the innovative, dynamic culture of KDE. Akademy brings together artists, designers, developers, translators, users, writers, sponsors and many other types of KDE contributors to celebrate the achievements of the past year and help determine the direction for the next year. Hands-on sessions offer the opportunity for intense work bringing those plans to reality. The KDE Community welcomes companies building on KDE technology, and those that are looking for opportunities.
For more information, please contact The Akademy Team.
We’ve submitted several talks to the OpenStack Summit in Tokyo. We’ve listed them all below with links to where to vote for each talk, so if you think they are interesting, please vote for them!
Creating business value with cross cloud infrastructure
Speaker: Mark Shuttleworth
Building an OpenStack cloud is becoming easy. Delivering value to a business with cloud services in production, at scale, to an enterprise-class SLA needs knowledge and experience. Mark Shuttleworth will discuss how Ubuntu, as the leading Linux for cloud computing, is taking knowledge and experience from OpenStack, AWS, Azure and Google to build technologies that deliver robust, integrated cloud services.
Supporting workload diversity in OpenStack
Speaker: Mark Baker
It is the workloads that matter. Building cloud infrastructure may be interesting for many, but the value it delivers is derived from the business applications that run in it. There is potentially a huge range of business applications that might run in OpenStack: some cloud native, many monolithic; some on Windows, others on RHEL, and increasing numbers on Ubuntu and CentOS. Diversity can create complexity. Issues of support, maintenance, backup, upgrade, monitoring, alerting, licensing, compliance and audit become amplified the more diverse the workloads are. Yet successful clouds should consume most workloads, so the problems need to be understood and addressed. This talk will look at some of the current challenges of supporting workload diversity in OpenStack today, how end users are managing them, and how they can be addressed by the community in the future.
Building an agile business for Asia market with OpenStack
Speaker: Mark Baker (Canonical), Yih Leong Sun, Dave Pitzely (Comcast)
In a previous talk at the Vancouver summit, “Enabling Business Agility with OpenStack”, we shared a few strategies for integrating OpenStack into an organisation, including selecting enterprise workloads, gaining developer acceptance, etc. This talk extends the previous one by focusing on the business aspect in the Asian market. The audience will learn how to take advantage of the growing OpenStack community in order to create a business case that meets regional market requirements, and understand the challenges in evaluating and deploying OpenStack. This presentation is brought to you by the OpenStack Enterprise Working Group.
Sizing Your OpenStack Cloud
Speakers: Arturo Suarez & Victor Estival
Sizing your OpenStack environment is not an easy task. In this session we will cover how to size your controller nodes to host all your OpenStack services as well as Ceph nodes, and how to overcommit CPU and memory based on real use cases and experiences. VMs? Containers? Bare metal? How do I scale? We will cover different approaches to sizing, and we will also talk about the most common bottlenecks that you might find while deploying an OpenStack cloud.
Deploying OpenStack in Multi-arch environments
Speakers: Victor Estival & Ryan Beisner
In this session we will talk about and demonstrate how to deploy an OpenStack environment on an IBM POWER8 server and on ARM processors. We will talk about the differences between deployments on the different architectures (Intel, POWER and ARM), and discuss multi-arch deployments and their advantages.
High performance servers without the event loop
Speaker: David Cheney
Conventional wisdom suggests that high performance servers require native threads, or more recently, event loops.
Neither solution is without its downside. Threads carry a high overhead in terms of scheduling cost and memory footprint. Event loops ameliorate those costs, but introduce their own requirements for a complex callback driven style.
A common refrain when talking about Go is that it’s a language that works well on the server: static binaries, powerful concurrency, and high performance.
This talk focuses on the last two items, how the language and the runtime transparently let Go programmers write highly scalable network servers without having to worry about thread management or blocking I/O.
The goal of this talk is to introduce the following features of the language and the runtime:
These four features work in concert to build an argument for the suitability of Go as a language for writing high performance servers.
Speakers: Chen Liang & Hua Zhang
Containers are emerging as a lightweight alternative to hypervisors. A lot of great work has been done to simplify container-to-container communication, such as Neutron, Fan, Kubernetes, SocketPlane, Dragonflow, etc. What are the main characteristics of container technology? What kind of design challenges do those characteristics bring us? What are the main technical differences between those great container networking solutions? All of these are the main topics of this session. Beyond that, we will also share some of our rough thoughts on making networking work best for containers.
IT departments of companies of any size and in any industry have been losing workloads to Amazon Web Services and the like over the last decade. OpenStack is the vehicle to compete with the public clouds for workloads, but there are several factors to consider when building and operating it in order to stay competitive and win. In this session we will walk through some of the factors (cost, automation, etc.) that made AWS and other public clouds successful, and how to apply them to your OpenStack cloud. Then we will focus on our competitive edge: what we should do to win.
Unlocking OpenStack for Service Providers
Speakers: Arturo Suarez, Omar Lara
Is OpenStack commercially viable for service providers?
Copy & Paste Your OpenStack Cloud Topology
Speaker: Ryan Beisner
A discussion and live demonstration, covering the use of service modeling techniques to inspect an existing OpenStack deployment and re-deploy a new cloud with the same service, network, storage and machine topology. This talk aims to demonstrate that modeling and reproducing a cloud can help an organization test specific scenarios, stage production upgrades, train staff and develop more rapidly and more consistently.
The Reproducible, Scalable, Observable, Reliable, Usable, Testable, Manageable Cloud
Speaker: Ryan Beisner
This talk discusses a proven and open-source approach to describing each of your OpenStack deployments using a simple service modeling language to make it manageable, discoverable, repeatable, observable, testable, redeployable and ultimately more reliable.
A live demonstration and walk-through will illustrate a simple method to consistently define, describe, share or re-use your cloud’s attributes, component configurations, package versions, network topologies, API service placements, storage placement, machine placement and more.
As more people start to use OvS, a lot of great work has been done to integrate it with OpenStack, and many new areas are being implemented and explored. Here we would like to share our experience making OvS work best on Ubuntu.
1. Performance tuning on OvS
OpenStack as proven and open-sourced Hyper-Converged OpenStack
Speakers: Takenori Matsumoto (Canonical), Ikuo Kumagai (BitIsle), Yuki Kitajima (Mellanox)
Many datacenter providers are looking for a proven, open-source hyper-converged OpenStack solution so that they can provide a simple, low-cost, high-performance, and rapidly deployable OpenStack environment. To achieve this goal, the following topics become key considerations.
To address these challenges, in this session we will share best practices and lessons learned about using OpenStack as a hyper-converged infrastructure with high-standard technologies.
The agenda to be covered in this session is below.
Testing Beyond the Gate – OpenStack Tailgate
Speakers: Malini Kamalambal (Rackspace), Steve Heyman (Rackspace), Ryan Beisner (Canonical.com), James Scollard (Cisco), Gema Gomez-Solano (Canonical), Jason Bishop (HDS)
This talk aims to discuss the objectives, motivations and actions of the new #openstack-tailgate group. Initially comprised of QA, CI and system integrator staff from multiple organizations, this effort formed during the Liberty summit to take testing to the next level: functional validation of a production OpenStack cloud. The tailgate group intends to focus on enhancing existing test coverage, utilizing and improving existing frameworks, converging the testing practices across organizations into a community based effort, and potentially spinning off new useful tools or projects in alignment with this mission.
We are not starting with predetermined tools; we are starting with the problem of testing a production OpenStack cloud composed of projects from the big tent, and figuring out how to effectively test it.
Testing Beyond the Gate – Validating Production Clouds
Speakers: Malini Kamalambal (Rackspace), Steve Heyman (Rackspace), Ryan Beisner (Canonical.com), James Scollard (Cisco), Gema Gomez-Solano (Canonical), Jason Bishop (HDS)
Join QA and operations staff from Rackspace, Canonical, Cisco Systems, Hitachi Data Systems, DreamHost and other companies as they discuss and share their specific OpenStack cloud validation approaches. This talk aims to cover a wide range of toolsets, testing approaches and validation methodologies as they relate to production systems and functional test environments. Attendees can expect to gain a high-level understanding of how these organizations currently address common functional cloud validation issues, and perhaps leverage that knowledge to improve their own processes.
High performance, super dense system containers with OpenStack Nova and LXD
Speaker: James Page
LXD is the container-based hypervisor led by Canonical, providing management of full system containers on Linux-based operating systems.
Combined with OpenStack, LXD presents a compelling proposition for managing hyper-dense container-based workloads via a cloud API, without the associated overhead of running full KVM-based virtual machines.
This talk aims to cover the current status of LXD and its integration with OpenStack via the Nova Compute LXD driver, the roadmap for both projects as we move towards the next LTS release (16.04) of Ubuntu, and a full demonstration of deployment and benchmarking of a workload on top of an LXD-based OpenStack cloud.
Attendees can expect to learn about the differences between system and application-based containers, how the LXD driver integrates containers into OpenStack Nova and Neutron, and how to effectively deploy workloads on top of system container-based clouds.
Why Top-of-Rack Whitebox Switches Are the Best Place to Run NFV Applications
Speaker: Scott Boynton
Since network switches began, they have been created with a common design criterion: packets should stay in the switching ASIC and never be sent to the CPU. Due to the latency involved, such events should happen only by exception and be avoided at all costs. As a result, switches have small CPUs with only enough memory to hold two images, and small channels between the CPU and the switching ASIC. This works well if the switch is only meant to be a switch. However, in the world of the Open Compute Project and whitebox switches, a switch can be so much more. Switches are no longer closed systems where you can only see the command line of the network operating system and just perform switching. Whitebox switches are produced by marrying common server components with high-powered switching ASICs, loading a Linux OS, and running a network operating system (NOS) as an application. The user has the ability not only to choose hardware from multiple providers, but also to choose the Linux distribution and the NOS that best match their environment. Commands can be issued from the Linux prompt or the NOS prompt and, most importantly, other applications can be installed alongside the NOS.
This new switch design opens up the ability to architect data center networks with higher scale and more efficient utilization of existing resources. Putting NFV applications on a top of rack switch allows direct access to servers in the rack keeping the traffic local and with lower latency. Functions like load balancing, firewalls, virtual switching, SDN agents, and even disaster recovery images are more efficient with smaller zones to manage and in rack communications to the servers they are managing. The idea of putting NFV applications on large powerful centralized servers to scale is replaced with a distributed model using untapped resources in the switches. As a server, the switch is capable of collecting analytics, managing server health, or even acting as a disaster recovery platform. Being able to recover images from a drive in the switch instead of from a storage array across the network will not only be faster but lower the expensive bandwidth required to meet recovery times.
Whitebox switch manufacturers are already delivering switches with more powerful CPUs, memory, and SSD drives. They have considerably more bandwidth between the CPU and the switching ASIC so applications running in secure containers on the Linux OS can process packets the same way they would on a separate server. With expanded APIs in the NOS, many of the functions can be performed directly between applications without even touching the PCI or XFI bus for even more performance.
A new design for how to distribute NFV applications is here today with an opportunity to stop wasting money on task specific devices in the network. With an NFV optimized whitebox switch, top of rack functionality is taken to new heights.
Deploying OpenStack from Source to Scalable Multi-Node Environments
Speaker: Corey Bryant
OpenStack is a complex system with many moving parts. DevStack has provided a solid foundation for developers and CI to test OpenStack deployments from source, and has been an essential part of the gating process since OpenStack’s inception.
Containers for the Hosting Industry
Speakers: Omar Lara, Arturo Suarez
Currently, the market for selling virtual machines in the cloud is reaching a break-even point. With the introduction of containers, we will show how the hosting industry can profit from high-density schemes that enable more attractive economies of scale in this sector.
We will discuss deploying LXC/LXD capabilities for this industry, and how to generate value with OpenStack and the LXD lightervisor to face new challenges in provisioning services such as messaging/communication, collaboration, storage, and web hosting workloads.
Deploying tip of master in production on a public cloud
Speaker: Marco Ceppi
Over the past six months I’ve been deploying and managing a public, production-grade OpenStack deployment from the latest commits on the master branch of each component. This was not an easy process; as if deploying the latest development release, for a production cloud no less, wasn’t challenge enough, the entire effort has been done by me alone. In this session I’ll dive into the decisions I made when designing my OpenStack cloud, pitfalls I encountered when performing my first deployments, and lessons learned during the design, execution, maintenance, and upgrades of my OpenStack cloud.
Building a CI/CD pipeline for your OpenStack cloud
Speakers: Marco Ceppi, Corey Bryant
Maintaining and upgrading an OpenStack deployment is a time-consuming and potentially perilous adventure. For years, software developers have been using continuous integration and continuous delivery tools to validate, test, and deploy their code. By using those same methodologies, your OpenStack can benefit from that process too. In this session we’ll go over some ways this can be achieved, applicable to everything from the smallest OpenStack deployments to full-scale OpenStacks. We’ll discuss how existing techniques and tools can be leveraged to produce a stable OpenStack deployment that is updated regularly based on a continuous testing cycle.
Life on the edge – deploying tip of master in production for a public cloud
Speaker: Marco Ceppi
Take a walk on the other side. The goal of this cloud was simple: could you deploy the latest master development of OpenStack? Could you deploy that same setup in a production system? Now how about as a public cloud? In this talk I walk through my process to achieve this, from the first deployment, to testing and validation of a staging version, to continuously updating production with the latest versions. This was a multi-month journey to get right, and it led to some interesting findings about how to maintain and grow an OpenStack deployment over time.
In this session I’ll walk through my thought process in picking components and tools, issues I ran into, lessons I’ve learned, and discuss the future of a cloud deployed from the bleeding edge.
Your OpenStack cloud has been deployed on a Long Term Support release such as Ubuntu 14.04 using the Icehouse release. Now that Icehouse is out of support from the core OpenStack developers and has moved into the hands of the distros, it’s time to start planning the move from Icehouse to Juno, Kilo, Liberty or beyond. Upgrading a live cloud is no trivial task, and this talk aims to walk you through the dos, don’ts, and how-tos for planning and implementing your cloud upgrade.
Charm your DevOps and Build Cloud-Native apps using Juju
Speakers: Ramnath Sai Sagar (Mellanox), Brian Fromme (Canonical)
DevOps teams are interested in building applications that are scalable, portable, resilient, and easily updatable. Oftentimes, one looks to adopt the cloud to achieve this. However, it is not as easy as simply lifting and shifting your application to the cloud, or splitting your application into smaller containers or VMs. The key to a cloud-native application is to be micro, requiring a complete rewrite of the application as microservices from scratch. Alternatively, one could leverage existing microservices that are efficient, scalable and easily deployable. Juju is the next-generation open-source universal modelling framework, allowing developers to reuse existing microservices using Charms and to configure them using simple relations. Join us in this presentation to see how Juju allows cloud-native tools, such as Docker, to be deployed, and how Mellanox’s Efficient Cloud Interconnect helps these applications achieve the highest performance with Juju.
Have container. Need network? Ubuntu LXD+MidoNet
Speakers: Antoni Segura Puimedon (Midokura.com), Mark Wenning (Canonical)
The LXD hypervisor from Canonical represents a pure-container approach to virtual machines. In our session, we describe how LXD connects with open-source MidoNet to deliver a distributed and highly available software-defined network for containers. The combination brings Ubuntu OpenStack container usage to the next level in networking performance and resilience.
lxc move c1 host2:. In 18 characters, you can live migrate containers between hosts. LXD makes using this powerful and complex technology very simple, and very fast. In this talk, I’ll give a short history of the underlying migration technology, CRIU, describe a few optimizations that LXD is doing in this space to make things fast, and discuss future areas of work both in CRIU and in LXD to support a larger class of applications and make things even faster.
Although this talk will include some specific examples and high-level strategy for LXD, it will be useful for anyone who is interested in general container migration via CRIU. In particular, I will discuss limitations and common pitfalls, so that interested users can gauge the project’s usefulness in their applications.
In this talk I’ll cover the experience operators and users will have when using LXD as their Nova-compute driver. For operators this includes access to much better density and potentially significant cost savings for particular workloads. Users will see more rapid provisioning and access to more capacity but may experience some limitations compared to KVM. I will examine these limitations and discuss how nova-compute-lxd works around them today, as well as discuss what the kernel community is doing to lift these restrictions in the future. Finally, I’ll give a few examples of workloads that will benefit from LXD’s performance and density advantages.
The Meizu MX4 Ubuntu Edition is now available for purchase from the official website, and the latest update for the Ubuntu Touch OS that powers it has brought some serious improvements.
Ubuntu for phones is constantly being improved on, and major patches are published all the time. Each new update brings new functionality, new features, improvements, and too many fixes to count. It’s also true that the OS is still young, and there is a lot of … (read more)
Chris Buechler has announced the release of pfSense 2.2.4, a FreeBSD-based firewall solution. The new release mostly includes bug fixes and security updates. The bug fixes include patches to prevent cross-site scripting attacks against the web interface, a fix for a TCP resource exhaustion attack and enhancements….
Manjaro Linux 0.8.13 has received a fresh update pack: the developers have upgraded some of the supported Linux kernels, updated a number of important packages, and implemented some important fixes.
Manjaro is a Linux distribution based on Arch Linux, but it’s not following the same release model. Instead of being a rolling release distro that gets upgraded all the time, the Manjaro devs have decided to make intermediary releases and publish big patch… (read more)
On July 27, the Debian Project, through Joerg Jaspert, announced the effective removal of support for the SPARC hardware architecture from the Debian GNU/Linux operating system.
This week in DistroWatch Weekly: Review: Following Debian’s GNU/Hurd in 2015 News: Ubuntu MATE tests new Welcome program, using Telegram on Fedora, Plasma for phones and Linux running on supercomputers Book review: Linux Bible (Ninth Edition) Torrent corner: BackBox, Calculate, Neptune Released last week: Red Hat Enterprise Linux….
The Manjaro development team, through Philip Müller, has posted an interesting article on the project’s website informing the entire Linux community that they need your help to contribute to the Arch Linux-based distribution.