Mapping and Calculating Areas Autonomously: An AI Robotics Programming R&D Project

Our ‘AI Robotics Programming’ team recently delivered a proof-of-concept robot that explores and maps unknown industrial and residential spaces while identifying floor types.

The robot (with the exception of its Roomba chassis) is built entirely from open-source (OS) hardware and software components. The result is an elegant blend of genetic algorithms and neural networks that enables the robot to autonomously map, survey, and calculate the areas of non-trivial indoor/outdoor spaces.

As an industrial use-case, the team showcased a demo in which the robot is used by a fictional cleaning company to optimize its costs. The robot is sent to a fictional warehouse, where it explores the space after business hours and sends back a report with the space’s map, the areas of its various sections, and the floor types. This report helps the company efficiently invoice the client for a regular maintenance contract and removes the human factor from part of its client-acquisition process, thus optimizing costs.

The project is still in its early stages, and we are confident that many more use-cases will be discovered as development progresses – for example, by adding support for outdoor spaces.

While the novelty lies mainly in the software part, our AI team has amusingly dubbed their assembled robot, SMOB – short for Self-Mapping Open Bot.

SMOB in action, surveying and mapping the area of Savoir-faire Linux HQ

What Powers SMOB?

The project is a unique blend of hardware and software design. Since the robot needed to withstand collisions with objects and travel at a fair speed, using the entire TurtleBot kit was not an option under such constraints. The team solved the problem by mating the Roomba Create base (which is well suited for heavy-duty operations) with the IMU (Inertial Measurement Unit), LIDAR (Light Detection and Ranging) and Raspberry Pi 3 from the TurtleBot 3 DIY kit. To achieve the latencies required for smooth operation, the team also modified the Debian kernel to support the 64-bit instruction set available on the ARMv8 processor (the modifications were based on the PI64 project).

SMOB’s Algorithms

SMOB combines two different techniques to achieve its goals. The space-discovery algorithm is an implementation of the genetic algorithm proposed by Mohamed Amine Yakoubi and Mohamed Tayeb Laskri. The algorithm is implemented in Python and works in conjunction with the ROS framework, which powers the robot’s movement.
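To give a feel for the approach, here is a minimal genetic-algorithm sketch – not the team’s actual code; the grid size, move encoding, fitness weights and GA parameters are illustrative assumptions. It evolves fixed-length move sequences over a grid, rewarding cells covered and penalizing revisits:

```python
import random

MOVES = {"U": (0, 1), "D": (0, -1), "L": (-1, 0), "R": (1, 0)}

def fitness(path, grid_size):
    """Cells covered minus a penalty for moves that revisit a cell."""
    visited, revisits, x, y = {(0, 0)}, 0, 0, 0
    for move in path:
        dx, dy = MOVES[move]
        x = min(max(x + dx, 0), grid_size - 1)   # clamp at the walls
        y = min(max(y + dy, 0), grid_size - 1)
        if (x, y) in visited:
            revisits += 1
        visited.add((x, y))
    return len(visited) - 0.5 * revisits

def evolve(grid_size=5, path_len=40, pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.choice("UDLR") for _ in range(path_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, grid_size), reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(path_len)         # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(path_len)] = rng.choice("UDLR")  # point mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda p: fitness(p, grid_size))

best = evolve()
```

The real algorithm additionally accounts for obstacles and sensor feedback through ROS; the sketch only shows the selection/crossover/mutation loop.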

The more complex part was the realization of the convolutional neural network responsible for classifying floor types. The main challenge was to properly identify the properties of an image (sharpness, luminosity, size, camera position, etc.) so that the network can achieve maximal classification accuracy – research that took the team substantial trial and error before arriving at the right parameters.
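To give a feel for the kind of image properties involved, here is a toy sketch – an illustration, not the team’s pipeline – computing two of them, mean luminosity and a crude sharpness proxy, on a grayscale image represented as nested lists of 0–255 values:

```python
def luminosity(img):
    """Mean pixel intensity of a grayscale image (rows of 0-255 values)."""
    pixels = [p for row in img for p in row]
    return sum(pixels) / len(pixels)

def sharpness(img):
    """Mean absolute horizontal gradient -- a crude edge/sharpness proxy."""
    diffs = [abs(row[i + 1] - row[i]) for row in img for i in range(len(row) - 1)]
    return sum(diffs) / len(diffs)

flat = [[128] * 4] * 4              # uniform patch: no edges, mid luminosity
checker = [[0, 255, 0, 255]] * 4    # high-contrast patch: strong edges
```

In practice such metrics (plus image size and camera position) become preprocessing and normalization choices that strongly affect how well the network classifies.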

Test Results

Our experiments show a coverage rate of roughly 1 m² per minute on a low-cost (~600 USD) robot.
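As a quick worked example of what that rate means for the warehouse scenario (the 500 m² figure is hypothetical):

```python
def survey_time_hours(area_m2, coverage_rate_m2_per_min=1.0):
    """Time to survey a floor area at a given coverage rate (m^2/min)."""
    return area_m2 / coverage_rate_m2_per_min / 60

# A hypothetical 500 m^2 warehouse section would take roughly 8.3 hours,
# i.e. comfortably within one after-hours window.
hours = round(survey_time_hours(500), 1)
```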

What Makes SMOB a Unique Robot?

SMOB is unique because it is among the first attempts to have a robot perform several complex tasks on embedded hardware. It can safely be argued that automated driving already uses algorithms far ahead of what SMOB can do, but they are also far more complex than SMOB’s basic, low-processing-cost solutions. SMOB is about minimalism: it is an attempt to create simple solutions to problems which are currently solved by much more power- and memory-hungry algorithms.

The Future of the Self-Mapping Open Bot in the Congested Robotics Field

One of the biggest challenges faced by our AI Robotics Programming team is outdoor operation, largely because of the limitations of LIDAR sensors. LIDAR – a device that maps objects in 3-D by bouncing laser beams off its real-world surroundings – is one of the most crucial and contentious components of any vehicle (in this case SMOB) that is supposed to drive itself around (read, for example: Simonite, 2017). SMOB’s LIDAR is unable to identify glass. It also faces difficulties in loop-closure detection and in navigation on complex surfaces. For example, a SMOB designed to detect potholes outdoors would most likely need a different chassis and drive-train, and would need to detect potholes before reaching them; otherwise it may not be able to get out of them once trapped inside.

Stay tuned for more details on the evolution of SMOB and its added capabilities.

Simonite, T. (2017, March 20). Self-Driving Cars’ Spinning-Laser Problem. MIT Technology Review.

How to Create a Light and Fast-Loading Theme with Liferay?

When It Comes to Websites, Page Speed Matters!

This article is motivated by a website project completed by our Integration Platforms and AI Department using Liferay 7 (the latest version of Liferay Portal) for one of our clients – a large Canadian corporation in the telecommunications and media industry. Alexis, our front-end developer, shares his first-hand experience of this project, with the overarching objective of demonstrating how to create a light and fast-loading theme using Liferay Portal.

Right off the bat, we need to acknowledge the fact that “The average online shopper expects your pages to load in two seconds or less…; after three seconds, up to 40% will abandon your site” (Gomez Inc., 2010). Thus, “page speed is increasingly important for websites” (see Lara Callender Hogan).


However, Liferay Portal itself is not a one-size-fits-all solution. For example, if the integrator does not develop context-specific front-end libraries, the website will be crippled by unnecessary delays. The mainstream recommended method, suggested by some experts, is to extend the default Liferay theme; this, however, leads to a technical dead end and extended page-loading times. Thus, based on the lessons learned from our past projects – experiential learning – and fortunate to be working with the latest version, Liferay 7, we decided to tweak the technology, develop a new module, and adapt it to the unique expectations of our client. In what follows we share our feedback and the positive test results.

Liferay Portal: A Powerful, Yet Delicate Tool

Liferay is an open-source Content Management System (CMS) and portal written in Java. A service provider like us therefore has the flexibility to customize it as much as required to fully adapt it to the client’s needs. It provides developers with extensive user, access-rights and document management. However, there are some theme-specific and performance issues to consider before starting a new website project. Here is a list:

  • Generally, a Liferay theme is home to the bulk of the front-end libraries and styles needed by Liferay and its portlets. However, a big chunk of this code and its dependencies is useless for a guest visitor. In the case of our client, such visitors form the majority of users, and the unnecessary code overload will indubitably cause performance issues and slow the site down.
  • Liferay imposes the presence of its Design Language, Lexicon (or rather, its only implementation: Clay). This may also cause several problems:
    • Mismatch between Design Language and the client’s design requirements can limit the project’s design feasibility.
    • This situation causes conflicts between CSS rules, therefore increasing the development time.
    • Liferay’s template bugfixes and updates can break the developed styles, causing an increase in maintenance time.
    • With content administration taking place on the pages themselves, the developed styles can break the behavior of the administration styles, also adding to development and maintenance time.

Although these issues pose challenges for a Liferay deployment project, we can offset their impact by developing front-end libraries, as you can read in what follows.

The Solution for Creating a Light and Fast-Loading Theme with Liferay

The solution for benefiting from the state-of-the-art Liferay Portal while keeping the loading speed in check is to create a light theme without any of Liferay’s front-end libraries, such as senna.js, AlloyUI, Lexicon/Clay, etc.

However, in Liferay 7 this objective is not as easy to achieve as it looks, mainly because it removes the possibility of having an administration view of the site. To avoid this issue, we have internally developed a standalone Liferay module called “Theme Switcher”. This added functionality allows us to tell the portal to display a theme according to the role of the user. Here is the link to the GitHub repo: savoirfairelinux/liferay-theme-switcher
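The actual module is an OSGi component (see the repository); the underlying idea boils down to a role-to-theme lookup, sketched here in illustrative Python with hypothetical theme names:

```python
# Hypothetical role-to-theme mapping; the real module reads its
# configuration from the portal rather than a hard-coded dict.
ROLE_THEMES = {
    "Administrator": "classic-theme",    # full admin UI, Liferay libraries included
    "guest": "lightweight-theme",        # stripped-down, fast-loading theme
}

def select_theme(user_roles):
    """Serve the full theme to administrators, the light theme to everyone else."""
    if "Administrator" in user_roles:
        return ROLE_THEMES["Administrator"]
    return ROLE_THEMES["guest"]
```

This way guest visitors – the majority of traffic – get the light theme, while administrators keep the full Liferay editing experience.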

5 Steps Towards Setting Up a Light Theme

Here are the five steps for creating a light theme using Liferay 7. Before we get started, have a look at an instance of a light Liferay theme on GitHub: savoirfairelinux/lightweight-liferay-theme. This theme is built with Maven and, being entirely open source, is yours to modify.

Step 1 – Clean Up Your Themes

Remove the inclusions from the portal_normal.ftl file – mainly <@liferay_util["include"] page=top_head_include />, but also <@liferay_util["include"] page=body_top_include /> and so on.

Step 2 – Repair the Themes

Unfortunately, the top_head_include variable does not only contain Liferay’s CSS and JavaScript libraries, but many other things as well. It is therefore necessary to re-include the head tags that interest us. Consider the following examples:

  • The meta tags that the user can define on each page (keywords, description, etc.):

<@liferay_theme["meta-tags"] />

  • The definition of canonical URLs:
<#if !is_signed_in && layout.isPublicLayout() >
  <#assign completeURL = portalUtil.getCurrentCompleteURL(request) >
  <#assign canonicalURL = portalUtil.getCanonicalURL(completeURL, themeDisplay, layout) >
  <link href="${htmlUtil.escapeAttribute(canonicalURL)}" rel="canonical" />
  <#assign availableLocales = languageUtil.getAvailableLocales(themeDisplay.getSiteGroupId()) >
  <#if availableLocales?size gt 1 >
    <#list availableLocales as availableLocale>
      <#if availableLocale == localeUtil.getDefault() >
        <link href="${htmlUtil.escapeAttribute(canonicalURL)}" hreflang="x-default" rel="alternate" />
      </#if>
      <link href="${htmlUtil.escapeAttribute(portalUtil.getAlternateURL(canonicalURL, themeDisplay, availableLocale, layout))}" hreflang="${localeUtil.toW3cLanguageId(availableLocale)}" rel="alternate" />
    </#list>
  </#if>
</#if>

In order to know what to look for, and how, you need to know the basics of JSP, which is used to generate the top_head_include variable. For more information, please consult the following link.

Step 3 – Organize the Themes

One of the challenges is that liferay-theme-tasks (link), the gulpfile provided by Liferay, may not function smoothly here. The good news is that you can organize the themes the way you want, i.e. according to your team’s development standards (Sass, Less, ES6, etc.). For this client, we reused and embellished the usual gulpfile of our web projects, all driven by Maven and its front-end plugins.

Step 4 – Automate the Deployment of the Themes

Step 4 is perhaps the longest one, but once it is completed you have a turnkey solution at hand. To quickly deploy the light theme files you have developed, you do not want to compile and wait for a full theme deployment on every CSS or JavaScript modification (the features normally provided by liferay-theme-tasks). In the OSGi context of Liferay 7, it is not possible to simply access the theme files (CSS, JS, templates, etc.) on disk. The trick is shown in the gulp `deployLocal:syncTheme` task.

The main idea is to deploy the module in webbundledir mode instead of webbundle mode.

Step 5 – Enjoy the Developed and Deployed Themes

You now have a Liferay theme that contains only what you want for your project and that is deployable like any other Liferay theme. So, it is time to enjoy the fruit of your efforts.

Results and Robustness Checks

To evaluate the performance of the themes created, you can run the same three tests on pages with identical content. We ran each of them against a Liferay server configured for production (8 CPUs, 28 GB of RAM, and an SSD). Below are the three instances we tested.

3 Instances Tested for Robustness

  1. A Liferay vanilla theme, the “Classic” theme
  2. The light theme we developed for the client’s web page project
  3. A possible “real” case scenario: the “Classic” theme with the additional styles and JavaScript of the theme we developed for the client.

To perform these three tests, we used a local instance of (6.0.3), with Chrome 62. For each test, 10 calls were made; below is a summary of the results.
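Each metric below is reported as a mean over the 10 calls with its spread. Given a list of raw timings, the summary figures can be reproduced along these lines (the sample values here are made up for illustration):

```python
from statistics import mean, stdev

def summarize(samples_ms):
    """Mean and sample standard deviation of repeated page-load timings (ms)."""
    return round(mean(samples_ms), 2), round(stdev(samples_ms), 2)

# Hypothetical DOMContentLoaded timings (ms) from 10 calls:
timings = [1210, 1260, 1225, 1280, 1190, 1245, 1270, 1235, 1255, 1230]
m, s = summarize(timings)   # mean and +/- spread, as in the tables below
```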

Results of the Test 1: The Classic Liferay Theme

  • 52 requests
  • 1064.77 kB
  • DOMContentLoaded: 1.24s (±31.95ms)
  • Load: 2.06s (±52.89ms)
  • SpeedIndex: 1349 (±34.99)

We deemed these results acceptable, as we could not find a single glitch in the theme as far as page performance was concerned. Note, however, that they exceed the usual threshold for requests: as a rule of thumb, anything beyond 40 requests is considered a large number, and the Classic theme makes 52.

Results of the Test 2: The Light Theme

  • 37 requests
  • 800.38 kB
  • DOMContentLoaded: 588ms (±42.96ms)
  • Load: 1.17s (±55.17ms)
  • SpeedIndex: 863 (±39.57)

Fewer requests were made, as the page is smaller. This custom theme thus clearly outperforms a basic Liferay theme!

Results of the Test 3: The Classic Theme with JavaScript and CSS of the Client’s Light Theme

  • 56 requests
  • 1214.59 kB
  • DOMContentLoaded: 2.22s (±457.34ms)
  • Load: 2.97s (±458.11ms)
  • SpeedIndex: 2475 (±456.71)


Creating a light theme with Liferay is advantageous, as it empowers you to keep control over your company’s image by managing every item sent to the browser. Liferay as a tool, coupled with our expertise and years of experience working with it, offers your company a solution – a powerful website – to stay on top of your content management efforts while signaling a positive image to clients.

Overall, this experience has been an instructive and informative one. Our client and we are genuinely happy, for the results are satisfactory and in line with our expectations. It is indeed a powerful website, worthy of bearing its brand name.


Celebrating the Next Generation of Business Digitization Talents


Savoir-faire Linux and ICTC celebrate the first cohort of successful graduates from the Small Business Digitization Initiative (SBDI).

The first students – the pioneers – started their six-month journey last January, a journey that can position them as the next generation of digitization talents. They went through intense in-class training helping them understand how technology opens countless improvement and change opportunities. At the same time, they tackled a real business challenge in a natural work environment, thus balancing theory and practice through a unique approach based on agile feedback.

Some students from the class of 2017

A Training to Be Seen as the Digitization Leader’s Toolkit

Students worked on projects as diverse as delivering fresh products to Nunavut, developing local retail shops promoting health, building cool brands, improving the customer experience in-store and online, and scaling the business of a micro-brewery. They touched a wide range of technologies, from beacons to communication and collaboration tools, and business applications of all sorts, from sales to accounting and customer relationship management. Some of the more technical students managed to build an Android mobile app, deploy an open-source internal social media platform, develop online stores with Shopify, WordPress or Odoo, and develop software for a local insurance company to do underwriting online. We used Odoo and, more generally, a wide range of open-source technologies that are easily accessible and useful for students tackling business challenges. We provided students with the necessary training and skill set to apply open-source projects and combine them with other technologies while solving business problems.

Theory Face-to-Face with the Reality

We would like to congratulate every single student for their efforts, for the challenges they bravely faced, for trying to apply their theoretical learning in real-life scenarios, regardless of the success rate!

During this six-month program, they confronted tough decision-making situations and challenges with both courage and determination. For some, these experiences led to new job opportunities. Others realized that they needed more training and coaching. Some candidates evolved dramatically thanks to their flexibility and adaptation to the business context. In fact, they learned that they sometimes had to let go of certain ambitions, or to adjust them to the real context.

One thing we are sure of is that all candidates received a fair amount of constructive one-on-one training and feedback. They used these feedback loops as an instrumental tool to develop business solutions and make decisions based on what they had learned in class. After all, there may be more than one technological solution to the same problem. Knowing that technology is just a tool, and not the end goal, students learned to think globally and critically about all the stakeholders affected by every decision. Such a pluralistic approach to realistic problem solving opened a range of possibilities which demanded an even more engaged and considerate decision-making process. We did our best to provide all the necessary tools to the best of our knowledge and capability, and hopefully we left a positive and lasting mark on them.

Savoir-faire Linux’s Role in the Program

Savoir-faire Linux is a technology and innovation company. We have been involved in this program as a business as well as technology training partner of ICTC. Furthermore, we recommended a module about sustainability to be added to the curriculum, and ICTC (our client-partner) accepted our proposal. As an Open Source Software advocate and an environmentally committed company, sustainability lies at the heart of our innovation strategy.

On the technology side, we mixed and matched some general IT culture with hands-on exercises and feedback from our pool of past experience. On the business side, we decided to broach a wide range of business processes (like a mini-MBA program) and showed the students how an ERP (in our case, the Odoo solution) can be an appropriate technological answer to a functional problem. The two days on sustainability were offered as an opening subject, giving students the opportunity to imagine how the future will constrain us and change our whole way of designing, optimizing, sourcing and, more globally, consuming. This part gave more sense to the innovation and change topics, and we were really happy with how well it was received by the participants.

SBDI Participants in Action

The Future of the SBDI

We are thinking about the future of the Small Business Digitization Initiative (SBDI) as we gather feedback from students and employers. As this initiative had never been done before, we have many ideas for adapting and improving the program. Our main challenge is to strike a more consistent balance between theory and practice, based on the feedback received and on the stakeholders’ rights and expectations. We have developed some answers, and we will realize this goal in the next edition of the program.

To conclude, we extend our gratitude to all the team members. In fact, 11 people – namely Marc, Mickael, Carol, Daniel, Monica, Adriana, Julien, Guillaume, Quentin, Bruno and Pascal – have been passionately involved in the curriculum design and preparation, as well as in teaching the sessions. They are happy and content with their contribution and experience and, most important of all, with the invaluable lessons learned from their engagement. Today, we are confident that we can make a significant difference in the future career development of Canadian youth, and we therefore look forward to other opportunities to participate in a new project of this kind.

To obtain more information about the program or get involved, please contact our Director of Operations/COO, Mickael Brard, by email: or by telephone: +1 514 276 5468 (#357)

More information

Three Chief Pillars of Our Success in Liferay: Experience, Expertise, and Community Embedment


Since 2010, we have been training, coaching and accompanying a range of enterprises and organizations in their digital transformation based on Liferay Portal solutions. Having received a number of awards and observed the rising success of our clients, today, we have unreserved confidence in what we can offer and how we can actually transform an organization through Liferay technology. In retrospect, we believe our successful performance rests upon three chief pillars (Figure 1):

  1. Our longtime experience accumulated through working on different projects,
  2. Our expertise rooted in our engineers’ know-how of Liferay, in particular, and open source technologies in general, and
  3. Our engineer-to-engineer relationship-building capability, which has firmly embedded us in the Liferay expert community.
Figure 1. Successful Liferay Implementation

Experience and Expertise: Managing the Paradox of a Catch-22 Situation

When it comes to deploying Liferay Portal in an organization’s hybrid information system, two main factors – experience and expertise – play a significant role. Experience is mainly built through years of hands-on deployment of Liferay on different IS platforms across a myriad of service and manufacturing industries. Expertise, on the other hand, is a matter of accessing, creating, and accumulating abstract knowledge and know-how in the related technology fields. The paradox – the catch-22 – is that the two feed on one another: without expertise, a team cannot deliver on project promises and gain enriching experience, and without actually engaging in high-impact projects one cannot build solid expertise. Learning from mistakes, learning to tackle new problems, learning to understand new business models and forecast needs – all translate into the indispensable interdependence between experience and expertise.

At Savoir-faire Linux, the enterprise-solutions team, which is responsible for Liferay-based projects, comprises 20 highly experienced open-source software consultants, half of whom are Liferay certified – meaning they have successfully completed Liferay training programs and passed the certification exams. Although Liferay expertise and experience are a necessary condition for successfully completing a Liferay project, they are not sufficient: Java experience and expertise are the complementary piece. For this critical reason, we have hired, trained, nurtured, and empowered a number of experienced Java developers who know the ins and outs of Java – its technical specifications, philosophy, and ecosystem.

Liferay Community: A Place for Sharing Knowledge and Experience

The Liferay community comprises the following: Liferay Projects, Community Projects, Community Feature Ideas, Releases, Forums, Blogs, Wiki, User Groups, Special Activities, and Upcoming Events. Each of these elements is a window to countless opportunities for developers, users, researchers and enthusiasts to engage in meaningful conversation leading to helpful take-aways such as experience, learning, know-how, or even social capital (see Picture 1). Winning five Community Excellence Awards has been a source of pride and gratitude for us, as it is a testimony to our community embedment and collaborative software R&D.

Picture 1. An Illustration of Liferay Community Knowledge and Experience Exchanges

At Savoir-faire Linux, we make sure our Liferay experts (Click here to read an example) stay connected, deeply engaged, and collaborate on online open source platforms. In fact, it is one of our core beliefs that optimal software development process depends on open scientific knowledge sharing practices, as embodied in Open Source Initiative’s definition of OS and Debian Social Contract. We also believe, we have the moral obligation to hold ourselves accountable and give back to OS communities, the same way we receive from them. Another example would be the Liferay Montreal User Group which has been a successful initiative to bring developers, clients, and users together to exchange ideas and discuss future road maps.

Picture 2. Map and Logo of the Liferay Montreal User Group

Experience, Expertise, and Community Embedment… So What?

Synergy – exponential knowledge growth – is an invaluable gain in the open-source ecosystem. The real magic happens when you have a team of engineers who know what to do, who have done it a few times, and who, when faced with the unknown, know how to figure it out, find the missing pieces, and/or co-create them in collaboration with other gurus. In the business world and organizational context, the synergy of the three pillars within the open-source software ecosystem leads to several benefits:

  1. The business need is met through digital transformation,
  2. End users are satisfied because there is a value-added innovative service to enjoy,
  3. Development costs decrease while margins increase.

Not long ago, we completed an award-winning project called HuGo for Humania Assurance. The HuGo platform has won two prestigious national and provincial recognitions: the Digital Transformation Award and the SME Grand Prize, OCTAS 2017 (Picture 3).

Picture 3. Awards Won by HuGo Platform Using Liferay Portal Technology

These awards are tangible artifacts showcasing the abstract elements underpinning a successful case of digital transformation. The dedication of Humania Assurance’s top management, the expertise, experience and community efforts of Savoir-faire Linux, and the know-how on open-source technologies accumulated over the past decades collectively create a synergistic, pronounced result. The bottom line is easy to read: the client’s brand strengthens as an innovative SME in the insurance industry; their end users – the source of revenue generation – seamlessly enjoy a creative service which makes their insurance policy applications painless, rapid, and easy; and Savoir-faire Linux gains yet another piece of experience, circulates the expertise in-house, and solidifies its commitment to advancing the Liferay community platform. (read the Case Study)

Shaping the Future Ensemble

We foresee three ways to better shape the future of Liferay-based digital transformation. First, we invite enthusiasts to join our Liferay Montreal User Group and participate in its events. We want you to be heard, and to this end we organize events and invite speakers like Raymond Augé, Senior Software Architect at Liferay (@rotty3000), to make sure you receive quality answers to your queries. Second, contact us and let us know about your specific expertise and experience – we would love to have you on our team. Third, we are currently inviting new ideas to realize in collaboration with the Liferay community; you may want to consider helping us push Liferay’s technological boundaries forward.

Savoir-faire Linux Announces Gold Sponsorship at Liferay Symposium North America 2017

Montreal, QC – (October 11, 2017) – Savoir-faire Linux – a Canadian leader in providing expertise on a range of open source technologies to enable digital transformation strategies – announces today its participation as a Gold Sponsor at this year’s Liferay Symposium North America, hosted by Liferay. Liferay makes software that helps companies create digital experiences on web, mobile and connected devices. Liferay Symposium North America will take place from October 16 to 17 in Austin.

This premier event for digital business and technology leaders will include two days of customer case studies, expert sessions, hands-on workshops, networking opportunities, access to Liferay’s top executives and architects, as well as the keynotes from digital experience thought leaders.

“Savoir-faire Linux is proud to be a Gold Sponsor at the Liferay Symposium North America”, said Christophe Villemer (Executive VP). “We look forward to meeting other Liferay enthusiasts and offering attendees our expert knowledge and experience in insurance and banking, as well as in other sectors such as public services, education, health, and the mechanical and industrial sectors.”

This digital business technology event will showcase the company’s experience, expertise and excellence in the Liferay technology field, and the company is poised to unveil its expertise in other domains of open source software as well.

To obtain more information on our Liferay services, please directly contact Marat Gubaidullin (VP Integration Platforms & Artificial Intelligence) through email ( or by phone (+1 514 276 5468   ext. 162).

Developing An Ansible Role for Nexus Repository Manager v3.x


This article aims to explain how to automate the installation and configuration of Nexus Repository Manager version 3.x with Ansible.

Ansible is a deployment tool that uses playbooks to automate application and infrastructure deployments. Its key advantage is flexibility: applications can be changed with versatility and treated as services. However, Ansible has some weaknesses too. Following a deliberately simple design, it functions solely as an applier of parameters, without taking into account information availability and security concerns, which need to be handled by other systems. That is why developers will often prefer to use Ansible in combination with a stateful agent system like Puppet, or a centralized management tool like Ansible Tower.

Automation with Ansible

Ansible focuses on the application-level setup, scripting a provisioning that can be run on top of any infrastructure-supporting tool (PaaS, containers, bare-metal, Vagrant, etc.). It only needs an SSH connection and a sudo account on the remote system.

Provisioning scripts in Ansible are written in a declarative style using YAML files grouped as roles. The atomic instructions in those roles are expressed using a number of core modules provided by Ansible. Please have a look at the Ansible documentation for an in-depth introduction.

Re-provisioning and Updating Configuration

One of the DevOps models for handling configuration updates consists of provisioning a brand new environment from scratch and completely discarding the old one (think container images). This implies reliable management of your data lifecycle. In our particular case of Nexus Repository Manager, this data consists of several gigs of uploaded/proxied artifacts, some audit logs, and OrientDB blobs containing the configuration. Therefore, depending on one’s environment constraints, it can make sense to be able to update the configuration of an already-provisioned Nexus instance. The declarative nature of Ansible’s core instructions is in line with this purpose, but any custom logic written in a role should be idempotent and take the “create or maybe update” path into account.
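The “create or maybe update” pattern can be sketched as follows – a minimal illustration in Python rather than Ansible itself: apply the desired state only when it differs from the current one, so that re-runs are no-ops:

```python
def ensure(resource, desired, store):
    """Idempotently create or update `resource` in `store`.

    Returns "created", "updated", or "unchanged" -- re-running with the
    same desired state must be a no-op, like an Ansible task reporting
    changed vs. ok.
    """
    current = store.get(resource)
    if current is None:
        store[resource] = dict(desired)
        return "created"
    if current != desired:
        store[resource] = dict(desired)
        return "updated"
    return "unchanged"

store = {}
ensure("repo/maven-releases", {"blobstore": "default"}, store)   # first run
ensure("repo/maven-releases", {"blobstore": "default"}, store)   # re-run: no-op
```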

One must also note that some parts of the Nexus configuration cannot be updated. Some examples include:

  • the settings related to BlobStores
  • the admin password if you ever lose the current one (update: or maybe through this way)

How to Make Nexus’s Groovy API Fit Well with Ansible

The basic steps of the installation are pretty straightforward and can all be written using simple Ansible core modules:

  • download and unpack the archive
  • create a system user/group
  • create a systemd service

(these steps are in tasks/nexus_install.yml)

And then comes the surprise: the Nexus configuration is not available in a simple text file format that could be edited with simple Ansible instructions. It is stored in an embedded OrientDB database that must not be altered directly. The documented way to set up Nexus is either through its web user interface, or through its Integration API.

The way the Integration API works is as follows:

  1. Write a Groovy script that handles your configuration change;
  2. Upload it to Nexus with an HTTP PUT request, creating a REST resource for this script;
  3. Call the script through its HTTP GET/POST resource.
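Outside of Ansible, the same steps can be sketched with curl against a running local Nexus instance (the script name, content, and default credentials here are purely illustrative):

```shell
# Create the REST resource for a trivial Groovy script
curl -u admin:admin123 -X POST -H 'Content-Type: application/json' \
     -d '{"name": "hello", "type": "groovy", "content": "return \"hi\""}' \
     http://localhost:8081/service/siesta/rest/v1/script

# Run it through its resource
curl -u admin:admin123 -X POST -H 'Content-Type: text/plain' \
     http://localhost:8081/service/siesta/rest/v1/script/hello/run
```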

URI Module to the Rescue!

Ansible’s uri module makes HTTP requests, providing automation to all of this.

The first step is to upload the Groovy script on Nexus. Note that the script may already be there. Therefore, on re-runs of the playbook, we try to delete it before taking any action, just in case:

In tasks/declare_script_each.yml:

  - name: Removing (potential) previously declared Groovy script {{ item }}
    uri:
      url: "http://localhost:8081/service/siesta/rest/v1/script/{{ item }}"
      user: 'admin'
      password: "{{ current_nexus_admin_password }}"
      method: DELETE
      force_basic_auth: yes
      status_code: 204,404

  - name: Declaring Groovy script {{ item }}
    uri:
      url: "http://localhost:8081/service/siesta/rest/v1/script"
      user: 'admin'
      password: "{{ current_nexus_admin_password }}"
      body_format: json
      method: POST
      force_basic_auth: yes
      status_code: 204
      body:
        name: "{{ item }}"
        type: 'groovy'
        content: "{{ lookup('template', 'groovy/' + item + '.groovy') }}"

The HTTP requests are executed from inside the target host, which is why localhost is used here. force_basic_auth: yes makes the HTTP client not wait for a 401 before providing credentials, as Nexus immediately replies with 403 when no credentials are passed. status_code is the expected HTTP status returned by Nexus. Since the Groovy script may not necessarily exist at that point, we must also accept the 404 status code.

The next step is to call the Groovy script that has been created through the previous HTTP call. Most of the scripts take some parameters as input (e.g. create user <x>), and this is where Ansible and Groovy help each other: both speak and understand JSON fluently.

On the Groovy script side :

import groovy.json.JsonSlurper
parsed_args = new JsonSlurper().parseText(args)

And to call this script from Ansible passing arguments:

  - include: call_script.yml
    vars:
      script_name: setup_anonymous_access
      args: # this structure will be parsed by the groovy JsonSlurper above
        anonymous_access: true

with call_script.yml:

  - name: Calling Groovy script {{ script_name }}
    uri:
      url: "http://localhost:8081/service/siesta/rest/v1/script/{{ script_name }}/run"
      user: 'admin'
      password: "{{ current_nexus_admin_password }}"
      headers:
        Content-Type: "text/plain"
      method: POST
      status_code: 200,204
      force_basic_auth: yes
      body: "{{ args | to_json }}"

This allows us to cleanly pass structured parameters from Ansible to the Groovy scripts, keeping the objects’ structure, arrays and basic types.

Nexus Groovy Scripts Development Tips and Tricks

Here are some hints that can help a developer while working on the Groovy scripts.

Have a Classpath Setup in Your IDE

As described in the Nexus documentation, having Nexus scripting in your IDE’s classpath can really help your work. If you automate the Nexus setup as much as possible, you will inevitably stumble upon some undocumented internal APIs. Additionally, some parts of the API do not have any source available (e.g. LDAP). In such cases, a decompiler can be useful.

Since our role on Github uses Maven with all the necessary dependencies, you can simply open it with IntelliJ and edit the scripts in files/groovy.

Scripting API Entry Points

As documented, there are four implicit entry points to access Nexus internals from your script:

  • core
  • repository
  • blobStore
  • security

Those are useful for simple operations, but for anything more complicated you will need to resolve services more in-depth:

  • through indirection from the main entry points: blobStore.getBlobStoreManager()
  • directly by resolving an inner @Singleton from container context: container.lookup(DefaultCapabilityRegistry.class.getName())

Take Examples from Nexus’s Source Code

Some parts of Nexus (7.4%, according to Github) are also written in Groovy, containing lots of nice code examples: see CoreApiImpl.groovy.

Inspecting the HTTP requests issued by the configuration web interface (AJAX requests) also provides some hints about the expected data structures, parameters, and values of some settings.

Last but not least, setting up a remote debugger from your IDE to a live Nexus instance can help, since there are lots of places where a very generic data structure is used (like Map<String, Object>) and only runtime inspection can quickly tell the actual needed types.
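For the remote-debugging setup, a JDWP agent can typically be added to the JVM options of the Nexus instance, and the IDE then attached to the chosen port (a sketch; the exact file location and port are assumptions depending on your installation):

```
# e.g. appended to the instance's nexus.vmoptions file, then restart Nexus
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005
```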

Detailed Examples

Here are some commented examples of Groovy scripts taken from the Ansible role.

Setting up a Capability

Capabilities are features of Nexus that can be configured using a unified user interface. In our case, this covers:

  1. anonymous access
  2. base public URL
  3. branding (custom HTML header/footer).


    import groovy.json.JsonSlurper
    import org.sonatype.nexus.capability.CapabilityReference
    import org.sonatype.nexus.capability.CapabilityType
    import org.sonatype.nexus.internal.capability.DefaultCapabilityReference
    import org.sonatype.nexus.internal.capability.DefaultCapabilityRegistry

    // unmarshall the parameters as JSON
    parsed_args = new JsonSlurper().parseText(args)

    // Type casts, JSON serialization insists on keeping those as 'boolean'
    parsed_args.capability_properties['headerEnabled'] = parsed_args.capability_properties['headerEnabled'].toString()
    parsed_args.capability_properties['footerEnabled'] = parsed_args.capability_properties['footerEnabled'].toString()

    // Resolve a @Singleton from the container context
    def capabilityRegistry = container.lookup(DefaultCapabilityRegistry.class.getName())
    def capabilityType = CapabilityType.capabilityType(parsed_args.capability_typeId)

    // Try to find an existing capability to update it
    DefaultCapabilityReference existing = capabilityRegistry.all.find {
        CapabilityReference capabilityReference ->
            capabilityReference.context().descriptor().type() == capabilityType
    }

    // update
    if (existing) {
        log.info(parsed_args.capability_typeId + ' capability updated to: {}',
                capabilityRegistry.update(existing.id(), existing.active, existing.notes(), parsed_args.capability_properties).toString())
    } else { // or create
        log.info(parsed_args.capability_typeId + ' capability created as: {}', capabilityRegistry
                .add(capabilityType, true, 'configured through api', parsed_args.capability_properties).toString())
    }

Setting up a Maven Repository Proxy

    import groovy.json.JsonSlurper
    import org.sonatype.nexus.repository.config.Configuration

    // unmarshall the parameters as JSON
    parsed_args = new JsonSlurper().parseText(args)

    // The two following data structures are good examples of things to look for via runtime inspection
    // either in client Ajax calls or breakpoints in a live server

    authentication = parsed_args.remote_username == null ? null : [
            type: 'username',
            username: parsed_args.remote_username,
            password: parsed_args.remote_password
    ]

    configuration = new Configuration(
            repositoryName: parsed_args.name,
            recipeName: 'maven2-proxy',
            online: true,
            attributes: [
                    maven  : [
                            versionPolicy: parsed_args.version_policy.toUpperCase(),
                            layoutPolicy : parsed_args.layout_policy.toUpperCase()
                    ],
                    proxy  : [
                            remoteUrl: parsed_args.remote_url,
                            contentMaxAge: 1440.0,
                            metadataMaxAge: 1440.0
                    ],
                    httpclient: [
                            blocked: false,
                            autoBlock: true,
                            authentication: authentication,
                            connection: [
                                    useTrustStore: false
                            ]
                    ],
                    storage: [
                            blobStoreName: parsed_args.blob_store,
                            strictContentTypeValidation: Boolean.valueOf(parsed_args.strict_content_validation)
                    ],
                    negativeCache: [
                            enabled: true,
                            timeToLive: 1440.0
                    ]
            ]
    )

    // try to find an existing repository to update
    def existingRepository = repository.getRepositoryManager().get(parsed_args.name)

    if (existingRepository != null) {
        // repositories need to be stopped before any configuration change
        existingRepository.stop()

        // the blobStore part cannot be updated, so we keep the existing value
        configuration.attributes['storage']['blobStoreName'] = existingRepository.configuration.attributes['storage']['blobStoreName']
        existingRepository.update(configuration)

        // re-enable the repo
        existingRepository.start()
    } else {
        repository.getRepositoryManager().create(configuration)
    }

Setting up a Role

    import groovy.json.JsonSlurper
    import org.sonatype.nexus.security.role.NoSuchRoleException
    import org.sonatype.nexus.security.user.UserManager

    // unmarshall the parameters as JSON
    parsed_args = new JsonSlurper().parseText(args)

    // some indirect way to retrieve the service we need
    authManager = security.getSecuritySystem().getAuthorizationManager(UserManager.DEFAULT_SOURCE)

    // Try to locate an existing role to update
    def existingRole = null

    try {
        existingRole = authManager.getRole(parsed_args.id)
    } catch (NoSuchRoleException ignored) {
        // could not find role
    }

    // Collection-type cast in groovy, here from String[] to Set<String>
    privileges = (parsed_args.privileges == null ? new HashSet() : parsed_args.privileges.toSet())
    roles = (parsed_args.roles == null ? new HashSet() : parsed_args.roles.toSet())

    if (existingRole != null) {
        existingRole.setName(parsed_args.name)
        existingRole.setDescription(parsed_args.description)
        existingRole.setPrivileges(privileges)
        existingRole.setRoles(roles)
        authManager.updateRole(existingRole)
    } else {
        // Another collection-type cast, from Set<String> to List<String>
        security.addRole(parsed_args.id, parsed_args.name, parsed_args.description, privileges.toList(), roles.toList())
    }

The resulting role is available on Ansible Galaxy and Github. It features the setup of:

  • Downloading and unpacking of Nexus
  • SystemD service unit
  • (optional) SSL-enabled apache reverse proxy
  • Admin password
  • LDAP
  • Privileges and roles
  • Local users
  • Blobstores
  • All types of repos
  • Base URL
  • Branding (custom HTML header & footer)
  • Automated jobs


Mastering the Thumbnail Generator with Liferay 7 CE and DXP

The Thumbnail Generator aims to improve and facilitate the generation of thumbnails provided by Liferay.

This plugin was created during a project requiring a large number of thumbnails with precise dimensions in order to minimize the loading time of web pages. Currently, Liferay is only capable of generating two different sizes of thumbnails when a new image is uploaded to an application (using the dl.file.entry.thumbnail.custom* settings of portal-ext.properties). This new plugin, however, gives you full control over the number of thumbnails created as well as the way they are generated.

This article is structured as follows. After briefly describing the main components of this plugin, I will explain how to configure it in order to manage an unlimited number of thumbnails with Liferay.

I. Describing the Plugin Components

The Listeners
The Thumbnail Generator uses two Model Listeners to listen to ”persistence events” such as the creation, modification, and deletion of documents in a Liferay application. A document can be of any file type (text, image, video, PDF, …). Later, you will learn how to configure the plugin so that only relevant documents are processed.

The first Listener listens for the creation and modification of a document, then it creates or updates the document’s thumbnails. The second listens for the deletion of a document, and deletes the thumbnails associated with this document in the aftermath.

The Servlet Filter
The Servlet Filter intercepts all requests calling for a document of the application and performs a series of validations before returning a thumbnail in response. It first analyzes the query parameters to determine whether a thumbnail is requested. Next, the filter verifies that the thumbnail actually exists, before finally returning it to the author of the request. If any of these checks fails, the query is ignored by the filter and follows its normal course – i.e. the original document requested is returned.

The ThumbnailService
Lastly, the ThumbnailService handles the creation/deletion of the thumbnails and organizes them in the storage system of Liferay, using the plugin’s configuration.

II. Using the Plugins

The use of the Thumbnail Generator entails configuring the plugins and retrieving the thumbnails.

The Thumbnail Generator’s configuration page (Menu => Control Panel => Configuration => System Settings => Thumbnail Configuration) allows you to define two options:

  • The format of the files that will be processed by the plugin.
    For example, to restrict the creation of thumbnails to JPG and PNG files, simply add these formats to the configuration; all other files will then be ignored by the plugin.
  • The command lines that will generate the thumbnails.
    In order to define a thumbnail and generate it, you need to add a line to the configuration with the following syntax: ‘name:command‘. The name will later provide access to this thumbnail; the command corresponds to the command line that will generate it (see ImageMagick’s documentation to explore all possible options). For example, ‘img_480:convert ${inputFile} -resize 480x270 ${outputFile}‘ will generate a thumbnail of dimensions 480x270 that will be retrievable through its name “img_480”.
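Putting the two options together, a configuration producing several sizes might look like the following (a hypothetical example; the names and geometries are illustrative):

```
img_480:convert ${inputFile} -resize 480x270 ${outputFile}
img_240:convert ${inputFile} -resize 240x135 ${outputFile}
thumb_square:convert ${inputFile} -resize 100x100^ -gravity center -extent 100x100 ${outputFile}
```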

Thumbnail Generator configuration page

In the above screenshot, three different thumbnails will be created for each JPG and PNG file uploaded to the application.

The plugin’s configuration not only allows the user to control the number of thumbnails to be generated, but also the way in which they are created. In this scenario, the convert command comes from the powerful image-editing library ImageMagick. Instead of this command, we could have used any other command executable on the machine hosting the Liferay application.

Thumbnails’ Retrieval
Once the plugin is deployed and configured, it is ready for use. Thumbnails will be automatically generated each time a document is uploaded to your application. In order to retrieve a thumbnail of the document, you just have to add the parameter “thumb={thumbnailName}” to the document’s URL.

An Example of Thumbnail Retrieval Process

  • The URL of a document (test.jpg) on a local instance of Liferay looks like this : http://localhost:8080/documents/20147/0/test.jpg/0d72d709-3e48-24b3-3fe6-e39a3c528725?version=1.0&t=1494431839298&imagePreview=1
  • The URL of a thumbnail associated to this document, named img_480, can be called this way : http://localhost:8080/documents/20147/0/test.jpg/0d72d709-3e48-24b3-3fe6-e39a3c528725?version=1.0&t=1494431839298&imagePreview=1&thumb=img_480

III. Administration

In order to give the user more control over the management of this module, an administration page (your site > Configuration > Thumbnails administration) has been created, allowing you to perform some actions on the thumbnails:

  • Re-generate all the thumbnails
  • Delete all the thumbnails
  • Delete orphan thumbnails (thumbnails no longer linked to any document, but still present due to a change in the configuration)

Thumbnail Generator administration

In conclusion, this brief tutorial introduced Liferay’s utility plugin, the Thumbnail Generator, and described how to use and configure it, retrieve thumbnails, and administer the plugin. Should you have any further questions or comments, please contact us.

Ring Stable Version Released: Ring 1.0 Liberté, Égalité, Fraternité

In 2017, Savoir-faire Linux released the stable version of Ring: Ring 1.0 – Liberté, Égalité, Fraternité. Ring is a free/libre and universal communication platform that preserves the users’ privacy and freedoms. It is a GNU package. It runs on multiple platforms and can be used for texting, calls, and video chats, more privately, more securely, and more reliably.

About Ring

Ring is a fully distributed system based on OpenDHT technology and the Ethereum blockchain. This means it does not need any central authority, enterprise, or even a server to function; it therefore avoids keeping centralized registries of users and storing their personal data. In addition, Ring is based on standard security protocols and end-to-end encryption, which prevents the decryption of communications over the network and consequently offers a high level of privacy and confidentiality.

Key Functionalities and Features

– Encrypted Audio/Video HD/Instant Messaging Communications (ICE, SIP, TLS)
– Screen Sharing and Conferencing (Win32 and GNU/Linux)
– Support of Ethereum Blockchain as Distributed Public Users’ Database
– Distributed Communication Platform (OpenDHT)
– Platform Support on GNU/Linux, Windows UWP (Windows 10 and Surface), macOS (10.10+) and Android (4.0+)
– Distributed under GPLv3+ License
– Parts of Ring can be used as a building block in any Internet of Things (IoT) project

Ring: An Impactful and Inspirational Social Innovation

Ring is based on state-of-the-art technologies (OpenDHT) and follows strict ethical guidelines. Together, this mix of free-software technologies and ethical rules offers end-users leading-edge privacy and anonymity, confidentiality, and security of conversations. In addition, its stable connectivity and innovative standard functionalities over a multitude of platforms make it a suitable choice for everyday communication.

Important Links
> Download Ring
> Contribute to Ring
> Technical documentation

How to Become a Contributor to WordPress? Baby Steps… To Big Dreams!

Are you passionate about the Web and Free Software Movement? Are you looking for an opportunity to play your part? Perhaps you’ve considered contributing to the WordPress project? This tutorial is for you! Our free software developers have contributed to the core of the application and now they’d love to introduce you to the development process of the most popular on-line publishing platform in the world. Cheer up and follow these steps to get started!

You can begin your journey by subscribing to a variety of platforms used by contributors. This helps you establish a connection with the community and remain in touch.

Subscribing to the official WordPress site is a must. It serves, among other things, to document procedures and publish information related to the development platform. This will be your main reference, so subscribe and log in.

Once you are connected, we invite you to consult the section of the website concerning Development and to get to know the different teams and their missions. This is also a great opportunity to subscribe to various newsletters if you wish to follow a particular stream of development (marketing, modules, themes, documentation, etc.)

People discuss their contributions via the Slack collaborative communication platform. They hold meetings, disseminate information, and help users reach their contribution objectives. On Slack, you will find all the influential developers of the WordPress community; it is the ideal place to ask them questions!

Please read the subscription documentation carefully, as the procedure can sometimes be confusing. If you are having trouble signing in to your account, visit the subscription page, where updated instructions will show up; you can then sign in from there.

Your last step before contributing! Check out the Trac ticket manager. Every change in WordPress is documented here. The main developers use this tool to approve and integrate changes to the core. To ensure effective, accurate and coordinated development, using documentation is mandatory. Now we can get started with developing for WordPress…

Following the Best Practices
Let the fun begin! Before writing code, you will need to integrate the project’s best practices and development standards. Some documents will be more useful to you than others. We suggest that you focus on these sections: Core Contributor Handbook, Best Practices, and Writing Patches. For PHP developers, you will also be interested in the PHP Coding Standards and Core API documentation.

The Environment
The majority of developers use Varying Vagrant Vagrants (VVV), which runs on all operating systems. VVV is an open-source Vagrant configuration focused on WordPress development. It is mostly used to develop themes, modules, and plugins, as well as for contributions to WordPress core. Installing optional Vagrant modules can be a bit complex, so if you are using a Linux environment, just make sure you have the “build-essential” and “libc6-dev” packages before you get started. Feel free to work with other tools as well: VVV is not the only choice you have. But if you choose other tools, please do not forget to report your developments to WordPress’s core code repository, to track the testing and progress of your contributions!

Here is an example of installing a development environment using VagrantPress and Git deployed with Ubuntu.

git clone
cd vagrantpress
vagrant up
rm -fr tests
git clone git:// tests
vagrant provision

SVN and Git
You have probably noticed that the code repository uses SVN. If you wish to contribute to the core, we strongly recommend using it. There is no obligation, though: it is also possible to work through Git. You will find the documentation you need for these two versioning systems in the following guides: How to Use Subversion (for developers, plugins, and the codex) for SVN, and Using Git for the second.

CSS and Javascript
WordPress compresses some resources. To be able to work on them, you must disable this function in “wp-config.php” by adding “define(‘SCRIPT_DEBUG’, true);”.

Code Validation
WordPress code standards are most likely different from those you are used to. A code format checker can provide great help. Use PHP_CodeSniffer with the WordPress Coding Standards. You can also read the WordPress style guidelines for detailed installation instructions.

Test-based Contribution
Did you know that you do not have to be a seasoned developer to contribute to WordPress? For example, testing is a good way to participate in development while learning. Trac lists the corrections to be tested. If you are starting out, work first on non-urgent corrections.

Baby Steps… To Big Dreams!
Yes, contributing to a free software project is indeed a huge investment of your time. Reading, setting things up, making configurations, downloading, etc.

However, once you’ve passed the first steps and made your first few contributions, you will officially be a free software contributor! Now take this chance to make your first baby steps and realize your big dream of becoming a free software developer!


Liferay DXP on Azure: A Practical Case Study

When it comes to deploying a common Java server on a common cloud infrastructure, a range of possibilities crosses one’s mind. This article presents some feedback on a particular, real-life, case of deploying Liferay Digital Experience (DXP, aka. Liferay 7 EE) on Microsoft’s Azure cloud platform.

Initial Shopping List

  • Liferay DXP, with Enterprise Subscription, fitting in the compatibility matrix
  • Process of deploying Liferay DXP on Azure
  • Reliance on SaaS solutions as much as possible
  • Use of infrastructure & platform provisioned by Ansible

The Happy Path : Azure SaaS Building Blocks

In this process, we consistently use Azure’s SaaS platform. We set up the following three (3) services:

Service I : Azure SQL database

Our team configures Tomcat JDBC using the sqljdbc42.jar driver.

Instructions :

/etc/tomcat8/Catalina/localhost/ROOT.xml :
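The resource declaration itself did not survive publication; a minimal sketch of what such a JDBC datasource for an Azure SQL database typically looks like follows (server name, database, credentials, and pool sizes are placeholders, not the project’s actual values):

```xml
<Context crossContext="true">
    <!-- Hypothetical JDBC datasource pointing at an Azure SQL database -->
    <Resource name="jdbc/LiferayPool" auth="Container"
              type="javax.sql.DataSource"
              driverClassName="com.microsoft.sqlserver.jdbc.SQLServerDriver"
              url="jdbc:sqlserver://<server>.database.windows.net:1433;database=<db>;encrypt=true"
              username="<user>@<server>" password="<password>"
              maxTotal="75" maxIdle="30" maxWaitMillis="10000" />
</Context>
```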


(note: there seems to be a new MS-SQL JDBC release in progress on Github)

Service II : Azure files

We set up the Liferay document store library using the Advanced File System Store mounted locally on Liferay’s VM. For the mounting process we use CIFS from an Azure file share.

Instructions :

/etc/fstab :

//<storage account name><file share name> /var/lib/liferay/data-remote cifs vers=3.0,username=<storage account name>,password=<storage account key>,uid=tomcat8,gid=tomcat8 0 0

/var/lib/liferay/osgi/modules/ :
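The contents of this OSGi configuration file were lost as well; for the Advanced File System Store it typically boils down to pointing the root directory at the mounted share (a sketch, assuming the mount point from the fstab entry above):

```
rootDir=/var/lib/liferay/data-remote
```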


Service III : Azure Application Gateway

We use an Application Gateway to handle a) SSL termination; and b) load-balancing of the web traffic between two Liferay instances (round robin only).

In order to convert the SSL certificate + chain + key to Azure’s .pfx bundle, use :

$> openssl pkcs12 -export -out <new pfx filename>.pfx -inkey <key filename>.key -in <certificate with chain filename>.crt -passout pass:<new pfx password>
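To sanity-check the conversion locally before uploading, you can run the same command against a throwaway self-signed certificate (a sketch; filenames and the password are placeholders):

```shell
# generate a disposable key + self-signed certificate
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=demo.example" -keyout demo.key -out demo.crt

# bundle them into a .pfx the way Azure expects
openssl pkcs12 -export -out demo.pfx -inkey demo.key -in demo.crt \
    -passout pass:demopass

# verify the bundle is readable with the chosen password
openssl pkcs12 -info -in demo.pfx -passin pass:demopass -noout
```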

The Plain Old Virtual Machine Part

Liferay DXP itself is installed on an Ubuntu Virtual Machine. This matches the recommended installation scheme for a production instance, which is deploying Liferay in the system tomcat8 installation. Liferay Hot Fixes and Service Packs can also be installed in this context, using Liferay’s patching-tool utility.

The Unexpected : Boundaries of Liferay’s Compatibility Matrix

Liferay DXP uses Elastic Search by default as its indexing and search provider. To stay within the compatibility matrix in production, Elastic Search needs to be hosted on a separate, dedicated Java Virtual Machine (JVM), as opposed to the default development scheme, which embeds ES alongside Liferay in a single JVM. Azure does not offer any Elastic Search as a service (managing your ES cluster is up to you). Furthermore, in order to secure the connection from Liferay DXP to a remote Elastic instance (e.g. Elastic Cloud), you will need the extra “Liferay Elastic Shield Integration” marketplace item, purchasable as part of the Enterprise Search Add-On Subscription. This extra item would allow you to use Elastic Shield, though you cannot stay within the compatibility matrix today using Elastic Cloud, as its ES version is more recent than the one officially supported by Liferay.

Following this approach, we instead install an Elastic Search 2.2 instance on a VM accessible from Liferay (on the same machine or an internal network), without any need for Elastic Shield. Figure 1 below summarizes the global scheme explained here.

Figure 1. The Summary of the Global Scheme

Based on Figure 1 above and the detailed explanations, we can map out three alternatives to be evaluated and tested in the future. First, the pre-packaged Liferay appliance seems to allow a single-click deployment on Azure, but for the CE edition only. This is one avenue for further discovery. Second, we need to explore and evaluate the feasibility of pushing various Docker Liferay images to Azure Container Service, and identify which ones do not offer this possibility. Third, is it possible to deploy Liferay as a custom Azure Web App? This is yet another issue to pursue. For more insights on deploying Liferay Digital Experience Platform (DXP) on Microsoft’s Azure cloud platform, stay tuned…