Three Chief Pillars of Our Success in Liferay: Experience, Expertise, and Community Embedment


Since 2010, we have been training, coaching, and accompanying a range of enterprises and organizations in their digital transformations based on Liferay Portal solutions. Having received a number of awards and observed our clients' rising success, we now have unreserved confidence in what we can offer and in how we can actually transform an organization through Liferay technology. In retrospect, we believe our successful performance rests upon three chief pillars (Figure 1):

  1. Our longtime experience accumulated through working on different projects,
  2. Our expertise rooted in our engineers’ know-how of Liferay, in particular, and open source technologies in general, and
  3. Our engineer-to-engineer relationship-building capability, which has firmly embedded us in the Liferay expert community.
Figure 1. Successful Liferay Implementation

Experience and Expertise: Managing the Catch-22 Paradox

When it comes to deploying Liferay Portal in an organization's hybrid information system, two main factors, experience and expertise, play a significant role. Experience is built mainly through years of hands-on work deploying Liferay on different IS platforms across a myriad of service and manufacturing industries. Expertise, on the other hand, is a matter of accessing, creating, and accumulating abstract knowledge and know-how in the related technology fields. The paradox, the catch-22, is that the two feed on one another: without expertise, a team cannot deliver on project promises and obtain enriching experience, and without actually engaging in high-impact projects, one cannot gain solid expertise. Learning from mistakes, learning to tackle new problems, learning to understand new business models and forecast needs: all of this translates into the indispensable interdependence between experience and expertise.

At Savoir-faire Linux, the Enterprise Solutions team, which is responsible for Liferay-based projects, comprises 20 highly experienced open source software consultants, half of whom are Liferay certified, meaning they have successfully completed Liferay training programs and passed the certification exams. Although Liferay expertise and experience are a necessary condition for successfully completing a Liferay project, they are not sufficient: Java experience and expertise are the complementary piece. For this critical reason, we have hired, trained, nurtured, and empowered a number of experienced Java developers who know the ins and outs of Java; namely, its technical specifications, philosophy, and ecosystem.

Liferay Community: A Place for Sharing Knowledge and Experience

The Liferay community comprises the following: Liferay Projects, Community Projects, Community Feature Ideas, Releases, Forums, Blogs, Wiki, User Groups, Special Activities, and Upcoming Events. Each of these elements is a window to countless opportunities for developers, users, researchers, and enthusiasts to engage in meaningful conversation leading to a helpful take-away such as experience, learning, know-how, or even social capital (see Picture 1). Winning five Community Excellence Awards has been a source of pride and gratitude for us, since it is a testimony to our community embedment and collaborative software R&D.

Picture 1. An Illustration of Liferay Community Knowledge and Experience Exchanges

At Savoir-faire Linux, we make sure our Liferay experts (click here to read an example) stay connected, deeply engaged, and collaborating on online open source platforms. In fact, it is one of our core beliefs that an optimal software development process depends on open, scientific knowledge-sharing practices, as embodied in the Open Source Initiative's definition of open source and the Debian Social Contract. We also believe we have a moral obligation to hold ourselves accountable and give back to open source communities the same way we receive from them. Another example is the Liferay Montreal User Group, a successful initiative to bring developers, clients, and users together to exchange ideas and discuss future road maps.

Picture 2. Map and Logo of the Liferay Montreal User Group Source: https://web.liferay.com/community/user-groups/montreal/welcome


Experience, Expertise, and Community Embedment… So What?

Synergy, or exponential knowledge growth, is an invaluable gain in the open source ecosystem. The real magic happens when you have a team of engineers who know what to do, who have done it a couple of times, and who, when faced with the unknown, know how to figure it out, find the missing pieces, and/or co-create them in collaboration with other gurus. In the business world and in organizational contexts, the synergy of the three pillars within the open source software ecosystem leads to several benefits:

  1. Business needs are met through digital transformation,
  2. End users are satisfied because there is a value-added, innovative service to enjoy,
  3. Development costs fall while margins increase.

Not long ago, we completed an award-winning project called HuGo for Humania Assurance. The HuGo platform has won two prestigious national and provincial recognitions: the Digital Transformation Award and the SME Grand Prize, OCTAS 2017 Lauréat (Picture 3).

Picture 3. Awards Won by HuGo Platform Using Liferay Portal Technology

These awards are tangible artifacts that showcase the abstract elements underpinning a successful case of digital transformation. The dedication of Humania Assurance's top management, the expertise, experience, and community efforts of Savoir-faire Linux, and the know-how on open source technologies accumulated over the past decades collectively created a synergistic and pronounced result. The bottom line is easy to read: the client's brand strengthens as an innovative SME in the insurance industry; their end users, the source of revenue generation, seamlessly enjoy a creative service that makes their insurance policy applications painless, rapid, and easy; and Savoir-faire Linux gains yet another piece of experience, circulates the expertise in-house, and solidifies its commitment to advancing the Liferay community platform. (Read the Case Study)

Shaping the Future Ensemble

We foresee three ways to better shape the future of Liferay-based digital transformation together. First, we invite enthusiasts to join our Liferay Montreal User Group and participate in its events. We want you to be heard, and to this end we organize events and invite speakers like Raymond Augé, Senior Software Architect at Liferay (@rotty3000), to make sure you receive quality responses to your queries. Second, contact us and let us know about your specific expertise and experience; we would love to have you on our team. Third, we are currently inviting new ideas to realize in collaboration with the Liferay community, so you may want to consider helping us push Liferay's technological boundaries forward.

Press Review Inno #2

Translation by Jacob Cook

Mobility

FireStore: The New Real-time Database from Google

By Thibault Wittemberg

Mobile developers are often confronted with a recurring puzzle when they develop apps: how best to manage data synchronization with a server back-end in both online and offline modes?

Google provides a new answer to this question with its FireStore service. Part of the Firebase suite and currently in beta, it is presented as Google's new preferred solution for real-time data storage. The solution is based on a document-oriented storage model (NoSQL), which means that we store collections of key/value data, like JSON objects.

FireStore can transparently manage the synchronization of data between multiple mobile devices, in addition to handling lost connections effortlessly. This means that, in case of a network outage or disconnection, data remains locally accessible, and updates to it are applied transparently once the connection is re-established. Below you can find two YouTube tutorials by Brian Advent. They show how easy integrating FireStore can be, using an example messaging application.

Two FireStore SDK integration tutorials for iOS, by Brian Advent:

You can also consult the official FireStore documentation.

RxSwift 4, with Swift 4 Support, Announces the First Release Candidate Version

With each new release of the Swift programming language, developers work feverishly to update their libraries to support the latest version. This has definitely been the case for the developers behind RxSwift, who deployed the first release candidate of RxSwift 4 on October 8th. This version is available via the various Swift package management solutions as well as on GitHub. The authors of RxSwift encourage open source contributions, the rules for which can be found in the repo.

Discovering the Architecture of Model-View-Intent

I thought I knew everything I needed to know about the MVI architecture; after all, the name is fairly clear. For me, an Intent = new Intent(). I initially believed it would be an architecture pattern based on Android Intents, and I was under the impression that a lot of spaghetti code would be required to replace Rx with IntentReceivers/Broadcasters. This seemed like a big step back to me. However, at Droidcon NYC, my colleagues were able to attend a presentation on MVI where they came away with invaluable insights.

I rolled up my sleeves and got to work playing with MVI. First, I found this article by Hannes Dorfmann, which explains the advantages of MVI in a very didactic way. MVI addresses a problem that is often ignored by other patterns like MVVM or MVP: state. By implementing a Reducer, we are able to manage the application's state changes in a deterministic fashion.
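To make the reducer idea concrete, here is a minimal sketch in plain Java (the State/Change/CounterReducer names are mine, not from Dorfmann's article): the reducer is a pure function (previousState, change) -> newState, so every transition is deterministic and trivially testable.

```java
// Hypothetical MVI reducer sketch: a pure (state, change) -> state function.
public class CounterReducer {

    // Immutable view state
    public static final class State {
        public final int count;
        public final boolean loading;
        public State(int count, boolean loading) {
            this.count = count;
            this.loading = loading;
        }
    }

    // "Partial state" changes emitted by the business logic
    public interface Change {}
    public static final class Increment implements Change {
        public final int by;
        public Increment(int by) { this.by = by; }
    }
    public static final class SetLoading implements Change {
        public final boolean loading;
        public SetLoading(boolean loading) { this.loading = loading; }
    }

    // The reducer: no side effects, the new state depends only on its inputs
    public static State reduce(State prev, Change change) {
        if (change instanceof Increment) {
            return new State(prev.count + ((Increment) change).by, prev.loading);
        }
        if (change instanceof SetLoading) {
            return new State(prev.count, ((SetLoading) change).loading);
        }
        return prev;
    }

    public static void main(String[] args) {
        State s = new State(0, false);
        s = reduce(s, new SetLoading(true));
        s = reduce(s, new Increment(2));
        System.out.println(s.count + " " + s.loading); // prints "2 true"
    }
}
```

Because all state lives in one immutable object and all transitions go through `reduce`, the current state can always be reproduced by replaying the same sequence of changes.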

Hannes Dorfmann proposes an implementation of MVI. However, it is written in Java and uses inheritance. The next step is to propose an implementation in Kotlin by composition/protocol.

Savoir-faire Linux Announces Gold Sponsorship at Liferay Symposium North America 2017

Montreal, QC – (October 11, 2017) – Savoir-faire Linux – a Canadian leader in providing expertise on a range of open source technologies to enable digital transformation strategies – announces today its participation as a Gold Sponsor at this year's Liferay Symposium North America, hosted by Liferay. Liferay makes software that helps companies create digital experiences on web, mobile, and connected devices. Liferay Symposium North America will take place from October 16 to 17 in Austin.

This premier event for digital business and technology leaders will include two days of customer case studies, expert sessions, hands-on workshops, networking opportunities, access to Liferay’s top executives and architects, as well as the keynotes from digital experience thought leaders.

“Savoir-faire Linux is proud to be a Gold Sponsor at the Liferay Symposium North America,” said Christophe Villemer (Executive VP). “We look forward to meeting other Liferay enthusiasts and offering attendees our expert knowledge and experience in insurance and banking, as well as in other sectors such as public services, education, health, and mechanical and industrial engineering.”

This digital business technology event will showcase the company's experience, expertise, and excellence in the Liferay technology field, and the company is poised to unveil its expertise in other domains of open source software as well.

To obtain more information on our Liferay services, please directly contact Marat Gubaidullin (VP Integration Platforms & Artificial Intelligence) through email (marat.gubaidullin@savoirfairelinux.com) or by phone (+1 514 276 5468   ext. 162).

What Could We Expect from Upcoming LDAPCon 2017?

LDAPCon is an international conference on LDAP technology and related issues such as identity management, authentication, and authorization.

LDAPCon is a biennial event, and this year it will take place from October 19 to 20 in the iconic city of Brussels, the capital of Belgium, where much of the business of the European Union and NATO is run. In the past, LDAPCon has been held in a number of other interesting places.

At Savoir-faire Linux, we have a team of motivated developers who are committed to the LDAP community. We sponsored this conference in 2015 (please read the news here), and we have renewed our commitment by being a Silver Sponsor this year as well.

Maybe Small, but Mighty

LDAPCon 2017 brings together 19 presentations and workshops in its two-day program. Our engineer Clément OUDOT is a member of the steering committee. This year's program will showcase some interesting talks, such as:

  • ReOpenLDAP: Modification of OpenLDAP for intensive use by a telecommunications operator
  • OpenLDAP – a new LDAP server for Samba4: An update on the integration of OpenLDAP in Samba 4 in place of the native directory coded by the Samba team
  • PHP-LDAP: News on the evolution of the LDAP API in PHP, abandoned for several years but now under active development again
  • What’s New In OpenLDAP: News from the OpenLDAP project by Howard Chu, the main developer

Other exciting LDAP topics such as Cloud Identity Management, Authorizations / Authentication, Single Sign-On or Supervision will be addressed in the various other presentations.

Savoir-faire Linux's representative will give an update on various SSO protocols (CAS, SAML, and OpenID Connect) in a talk on Friday afternoon, just before the presentation of the FusionDirectory software, an LDAP directory data management tool used in our internal infrastructure and by some of our customers.

If you are interested in this conference, you can book your tickets online on the conference website.

Developing An Ansible Role for Nexus Repository Manager v3.x



This article explains how to automate the installation and configuration of Nexus Repository Manager version 3.x with Ansible.

Ansible is a deployment tool that uses playbooks to automate application and infrastructure deployments. Its key advantage is flexibility: applications can be changed easily and treated as services. However, Ansible has some weaknesses too. By a deliberately simple design, it functions solely as an applier of parameters, without taking into account information availability and security concerns, which need to be handled by other systems. That is why developers often prefer to use Ansible in combination with a stateful agent system like Puppet, or a centralized management tool like Ansible Tower.

Automation with Ansible

Ansible focuses on the application-level setup, scripting a provisioning that can be run on top of any infrastructure-supporting tool (PaaS, containers, bare-metal, vagrant, etc.). It only needs an SSH connection and a sudo account to the remote system.

Provisioning scripts in Ansible are written in a declarative style using YAML files grouped as roles. The atomic instructions in those roles are expressed using a number of core modules provided by Ansible. Please have a look at the Ansible documentation for an in-depth introduction.
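As a minimal illustration of this declarative style (the role contents, paths, and names below are hypothetical, not taken from the Nexus role), a role's task file might look like:

```yaml
---
# Hypothetical tasks/main.yml of a small role: each task names a desired
# state, and Ansible's core modules make it so over SSH + sudo.
- name: Ensure the application user exists
  user:
    name: myapp
    system: yes

- name: Ensure the application directory exists
  file:
    path: /opt/myapp
    state: directory
    owner: myapp
    mode: '0755'

- name: Render the application configuration from a template
  template:
    src: myapp.conf.j2
    dest: /opt/myapp/myapp.conf
```

Running the same playbook twice leaves the system unchanged on the second run, since each task describes a target state rather than an action.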

Re-provisioning and Updating Configuration

One of the DevOps models for handling configuration updates consists of provisioning a brand new environment from scratch and completely discarding the old one (think container images). This implies a reliable management of your data lifecycle. In our particular case of Nexus Repository Manager, that data consists of several gigabytes of uploaded/proxied artifacts, some audit logs, and OrientDB blobs containing the configuration. Therefore, depending on one's environment constraints, it can make sense to be able to update the configuration of an already-provisioned Nexus instance. The declarative nature of Ansible's core instructions is in line with this purpose, but any custom logic written in a role should be idempotent and take the "create or maybe update" path into account.

One must also note that some parts of the Nexus configuration cannot be updated. Some examples include:

  • the settings related to BlobStores
  • the admin password, if you ever lose the current one (update: or maybe through this way)

How to Make Nexus's Groovy API Fit Well with Ansible

The basic steps of the installation are pretty straightforward and can all be written using simple Ansible core modules:

  • download and unpack the archive
  • create a system user/group
  • create a systemd service

(these steps are in tasks/nexus_install.yml)
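A hedged sketch of what those tasks could look like using only core modules (the version variable, URL, paths, and template names here are illustrative, not the role's actual contents):

```yaml
---
# Illustrative only: download/unpack, system user/group, systemd unit
- name: Create nexus group
  group:
    name: nexus
    system: yes

- name: Create nexus user
  user:
    name: nexus
    group: nexus
    system: yes
    home: "/opt/nexus-{{ nexus_version }}"

- name: Download and unpack the Nexus archive
  unarchive:
    src: "https://download.sonatype.com/nexus/3/nexus-{{ nexus_version }}-unix.tar.gz"
    dest: /opt
    remote_src: yes
    owner: nexus
    creates: "/opt/nexus-{{ nexus_version }}"

- name: Install the systemd service unit
  template:
    src: nexus.service.j2
    dest: /etc/systemd/system/nexus.service
  notify: reload systemd
```

The `creates` argument keeps the unarchive step idempotent: it is skipped when the target directory already exists.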

And then comes the surprise: the Nexus configuration is not available in a simple text file format that could be edited with simple Ansible instructions. It is stored in an embedded OrientDB database that must not be altered directly. The documented way to set up Nexus is either through its web user interface or through its Integration API.

The way the Integration API works is as follows:

  1. Write a Groovy script that handles your configuration change;
  2. Upload it to Nexus with an HTTP PUT request, creating a REST resource for this script;
  3. Call the script through its HTTP GET/POST resource.

URI Module to the Rescue!

Ansible's uri module makes HTTP requests, which lets us automate all of this.

The first step is to upload the Groovy script on Nexus. Note that the script may already be there. Therefore, on re-runs of the playbook, we try to delete it before taking any action, just in case:

Through tasks/declare_script_each.yml, follow on:

  ---
  - name: Removing (potential) previously declared Groovy script {{ item }}
    uri:
      url: "http://localhost:8081/service/siesta/rest/v1/script/{{ item }}"
      user: 'admin'
      password: "{{ current_nexus_admin_password }}"
      method: DELETE
      force_basic_auth: yes
      status_code: 204,404

  - name: Declaring Groovy script {{ item }}
    uri:
      url: "http://localhost:8081/service/siesta/rest/v1/script"
      user: 'admin'
      password: "{{ current_nexus_admin_password }}"
      body_format: json
      method: POST
      force_basic_auth: yes
      status_code: 204
      body:
        name: "{{ item }}"
        type: 'groovy'
        content: "{{ lookup('template', 'groovy/' + item + '.groovy') }}"

The HTTP requests are executed from inside the target host, which is why localhost is used here. force_basic_auth: yes makes the HTTP client not wait for a 401 before providing credentials, as Nexus immediately replies with 403 when no credentials are passed. status_code is the expected HTTP status replied by Nexus. Since the Groovy script may not necessarily exist at that point, we must also accept the 404 status code.

The next step is to call the Groovy script created through the previous HTTP call. Most of the scripts will take some parameters as input (e.g. create user <x>), and this is where Ansible and Groovy help each other out: both having grown up in the REST era, they speak and understand JSON fluently.

On the Groovy script side:

import groovy.json.JsonSlurper
parsed_args = new JsonSlurper().parseText(args)
security.setAnonymousAccess(Boolean.valueOf(parsed_args.anonymous_access))

And to call this script from Ansible passing arguments:

  - include: call_script.yml
    vars:
      script_name: setup_anonymous_access
      args: # this structure will be parsed by the groovy JsonSlurper above
        anonymous_access: true

with call_script.yml:

  ---
  - name: Calling Groovy script {{ script_name }}
    uri:
      url: "http://localhost:8081/service/siesta/rest/v1/script/{{ script_name }}/run"
      user: 'admin'
      password: "{{ current_nexus_admin_password }}"
      headers:
        Content-Type: "text/plain"
      method: POST
      status_code: 200,204
      force_basic_auth: yes
      body: "{{ args | to_json }}"

This allows us to cleanly pass structured parameters from Ansible to the Groovy scripts, keeping the objects’ structure, arrays and basic types.

Nexus Groovy Scripts Development Tips and Tricks

Here are some hints that can help a developer while working on the Groovy scripts.

Have a Classpath Setup in Your IDE

As described in the Nexus documentation, having the Nexus scripting libraries in your IDE's classpath can really help your work. If you automate the Nexus setup as much as possible, you will inevitably stumble upon some undocumented internal APIs. Additionally, some parts of the API do not have any source available (e.g. LDAP). In such cases, a decompiler can be useful.

Since our role on GitHub uses Maven with all the necessary dependencies, you can simply open it in IntelliJ and edit the scripts in files/groovy.

Scripting API Entry Points

As documented, there are four implicit entry points to access Nexus internals from your script:

  • core
  • repository
  • blobStore
  • security

Those are useful for simple operations, but for anything more complicated you will need to resolve services more in-depth:

  • through indirection from the main entry points: blobStore.getBlobStoreManager()
  • directly by resolving an inner @Singleton from container context: container.lookup(DefaultCapabilityRegistry.class.getName())

Take Examples from Nexus’s Source Code

Some parts of Nexus (7.4%, according to GitHub) are also written in Groovy, providing lots of nice code examples: CoreApiImpl.groovy.

Watching the HTTP requests made by the configuration web interface (AJAX requests) also provides hints about the expected data structures, parameters, and setting values.

Last but not least, setting up a remote debugger from your IDE to a live Nexus instance can help, since there are lots of places where a very generic data structure is used (like Map<String, Object>) and only runtime inspection can quickly tell the actual needed types.

Detailed Examples

Here are some commented examples of Groovy scripts taken from the Ansible role.

Setting up a Capability

Capabilities are features of Nexus that can be configured using a unified user interface. In our case, this covers:

  1. anonymous access
  2. base public URL
  3. branding (custom HTML header/footer).

Instructions:


    import groovy.json.JsonSlurper
    import org.sonatype.nexus.capability.CapabilityReference
    import org.sonatype.nexus.capability.CapabilityType
    import org.sonatype.nexus.internal.capability.DefaultCapabilityReference
    import org.sonatype.nexus.internal.capability.DefaultCapabilityRegistry

    // unmarshall the parameters as JSON
    parsed_args = new JsonSlurper().parseText(args)

    // Type casts, JSON serialization insists on keeping those as 'boolean'
    parsed_args.capability_properties['headerEnabled'] = parsed_args.capability_properties['headerEnabled'].toString()
    parsed_args.capability_properties['footerEnabled'] = parsed_args.capability_properties['footerEnabled'].toString()

    // Resolve a @Singleton from the container context
    def capabilityRegistry = container.lookup(DefaultCapabilityRegistry.class.getName())
    def capabilityType = CapabilityType.capabilityType(parsed_args.capability_typeId)

    // Try to find an existing capability to update it
    DefaultCapabilityReference existing = capabilityRegistry.all.find {
        CapabilityReference capabilityReference ->
            capabilityReference.context().descriptor().type() == capabilityType
    }

    // update
    if (existing) {
        log.info(parsed_args.typeId + ' capability updated to: {}',
                capabilityRegistry.update(existing.id(), existing.active, existing.notes(), parsed_args.capability_properties).toString()
        )
    } else { // or create
        log.info(parsed_args.typeId + ' capability created as: {}', capabilityRegistry.
                add(capabilityType, true, 'configured through api', parsed_args.capability_properties).toString()
        )
    }

Setting up a Maven Repository Proxy

    import groovy.json.JsonSlurper
    import org.sonatype.nexus.repository.config.Configuration

    // unmarshall the parameters as JSON
    parsed_args = new JsonSlurper().parseText(args)

    // The two following data structures are good examples of things to look for via runtime inspection
    // either in client Ajax calls or breakpoints in a live server

    authentication = parsed_args.remote_username == null ? null : [
            type: 'username',
            username: parsed_args.remote_username,
            password: parsed_args.remote_password
    ]

    configuration = new Configuration(
            repositoryName: parsed_args.name,
            recipeName: 'maven2-proxy',
            online: true,
            attributes: [
                    maven  : [
                            versionPolicy: parsed_args.version_policy.toUpperCase(),
                            layoutPolicy : parsed_args.layout_policy.toUpperCase()
                    ],
                    proxy  : [
                            remoteUrl: parsed_args.remote_url,
                            contentMaxAge: 1440.0,
                            metadataMaxAge: 1440.0
                    ],
                    httpclient: [
                            blocked: false,
                            autoBlock: true,
                            authentication: authentication,
                            connection: [
                                    useTrustStore: false
                            ]
                    ],
                    storage: [
                            blobStoreName: parsed_args.blob_store,
                            strictContentTypeValidation: Boolean.valueOf(parsed_args.strict_content_validation)
                    ],
                    negativeCache: [
                            enabled: true,
                            timeToLive: 1440.0
                    ]
            ]
    )

    // try to find an existing repository to update
    def existingRepository = repository.getRepositoryManager().get(parsed_args.name)

    if (existingRepository != null) {
        // repositories need to be stopped before any configuration change
        existingRepository.stop()

        // the blobStore part cannot be updated, so we keep the existing value
        configuration.attributes['storage']['blobStoreName'] = existingRepository.configuration.attributes['storage']['blobStoreName']
        existingRepository.update(configuration)

        // re-enable the repo
        existingRepository.start()
    } else {
        repository.getRepositoryManager().create(configuration)
    }

Setting up a Role

    import groovy.json.JsonSlurper
    import org.sonatype.nexus.security.user.UserManager
    import org.sonatype.nexus.security.role.NoSuchRoleException

    // unmarshall the parameters as JSON
    parsed_args = new JsonSlurper().parseText(args)

    // some indirect way to retrieve the service we need
    authManager = security.getSecuritySystem().getAuthorizationManager(UserManager.DEFAULT_SOURCE)

    // Try to locate an existing role to update
    def existingRole = null

    try {
        existingRole = authManager.getRole(parsed_args.id)
    } catch (NoSuchRoleException ignored) {
        // could not find role
    }

    // Collection-type cast in groovy, here from String[] to Set<String>
    privileges = (parsed_args.privileges == null ? new HashSet() : parsed_args.privileges.toSet())
    roles = (parsed_args.roles == null ? new HashSet() : parsed_args.roles.toSet())

    if (existingRole != null) {
        existingRole.setName(parsed_args.name)
        existingRole.setDescription(parsed_args.description)
        existingRole.setPrivileges(privileges)
        existingRole.setRoles(roles)
        authManager.updateRole(existingRole)
    } else {
        // Another collection-type cast, from Set<String> to List<String>
        security.addRole(parsed_args.id, parsed_args.name, parsed_args.description, privileges.toList(), roles.toList())
    }

The resulting role is available on Ansible Galaxy and GitHub. It features the setup of:

  • Downloading and unpacking of Nexus
  • SystemD service unit
  • (optional) SSL-enabled apache reverse proxy
  • Admin password
  • LDAP
  • Privileges and roles
  • Local users
  • Blobstores
  • All types of repos
  • Base URL
  • Branding (custom HTML header & footer)
  • Automated jobs

 

DebConf17: a Successful Event, a Cherished Memory, and a Promising Future


From August 5 to 12, we actively participated in DebConf17 in several professional capacities: platinum sponsor, presenter, workshop and career fair participant, and social event host.

DebConf17, the annual Debian Developers and Contributors Conference, drew over 405 attendees from all over the world and featured 169 events, including 89 talks, 61 discussion sessions or BoFs, 6 workshops, and 13 other activities; it has been hailed as a success. Indeed, we are grateful that we could be part of this fantastic, free software community-based and scientific event and play our part in its development. In what follows, we provide a snapshot of our engagement activities.


The Honor of Being Part of the DebConf17 Sponsorship Team

At Savoir-faire Linux, we are committed to building a sustainable economy based on cooperation, collaboration, and a knowledge-sharing strategy. We strongly believe our strength depends on the quality of our partnership with, and support of, community projects and the actors of the free software world. To fulfill this commitment, we have forged strong partnerships with and supported the Free Software Foundation, the Linux Foundation, Debian, Python, FFmpeg, and other open and free software projects. Naturally, when we heard that Debian's annual conference was going to be held in Montreal, we were thrilled and excited to be part of this great movement. In short, we think one cannot build a freer world without supporting the free software movement. And Debian is one of the gems of the free software world.

Our Employees Fell Head over Heels in Love with DebConf17

Lucas Bajolet presenting his topic Unicode: a quick overview

As soon as our employees learned that DebConf17 was going to be in town, they started submitting their talks, presentations, and workshops. We had rarely experienced such a self-motivated, dynamic, and joyful wave of attention toward an event! After submissions, the list of finalists was announced on DebConf17's official schedule page.

Amir Taherizadeh

A Hot & Fun Career Fair!

Cyrille Béraud (President) engaging with free software developers at DebConf17

On Saturday, August 5, we launched the official career connect activity. Our president, Cyrille Béraud, also made himself available to personally answer questions and meet with the pool of talent. It was a very successful networking event that lasted throughout the conference. We met amazing, highly skilled free software hackers and had wonderful technical and social discussions with them. We received many CVs, some of which are now in the pipeline to be evaluated internally.

 

The Social Event: Ring on! Mix & Mingle with Ring Team

Ring is a free and universal communication platform that preserves the users’ privacy and freedoms. It is a GNU package. It runs on multiple platforms; and, it can be used for texting, calls, and video chats more privately, more securely, and more reliably.


On July 21, we released the stable version of Ring: Ring 1.0 – Liberté, Égalité, Fraternité. However, since DebConf17 was around the corner, we postponed the celebration to share the merry moment with the DebConf free software developers. Our plan worked well! With the help of the DebConf organizers we spread the news, and on the evening of August 8 we received our guests. What a magnificent crowd! Among them were Daniel Pocock (Debian), John Sullivan (Free Software Foundation), Michael Meskes (Credativ), and many other wonderful ladies and gentlemen. Cyrille Béraud made a very brief speech to thank all the free software developers contributing to the Ring Project, and showed special gratitude to the core development team for the countless hours they put in to realize this milestone.

 

Stefano Zacchiroli: Debian Project Leader (on the left), Daniel Pocock from Debian (in the middle), John Sullivan from Free Software Foundation (on the right)
Amandine Gravier: Communications Manager (in the middle), Dorina Mosku: Ring Project Coordinator (on the right), Chloé Nignol from DeGama (on the left)


DebConf17 Coming to an End, but the Free Software Mission Continues!

The sad truth is that once again we had to say goodbye to another DebConf! But word on the street is that DebConf18 is going to be even greater! It hardly matters that one DebConf ends, because the Debian community is great enough to make another great one the following year!

DebConf17 Family Photo

Mastering the Thumbnail Generator with Liferay 7 CE and DXP

The Thumbnail Generator aims to improve on and simplify the thumbnail generation that Liferay provides out of the box.

This plugin was created during a project that required a large number of thumbnails with precise dimensions in order to minimize the loading time of web pages. Currently, Liferay can only generate two different sizes of thumbnails when a new image is uploaded to an application (using the dl.file.entry.thumbnail.custom* settings of portal-ext.properties). This new plugin, however, gives you full control over the number of thumbnails created as well as the way they are generated.

This article is structured as follows. After briefly describing the main components of this plugin, I will explain how to configure it in order to manage an unlimited number of thumbnails with Liferay.

I. Describing the Plugin Components

The Listeners
The Thumbnail Generator uses two Model Listeners to listen for "persistence events" such as the creation, modification, and deletion of documents in a Liferay application. A document can be of any file type (text, image, video, PDF, …). Later, you will learn how to configure the plugin to process only the relevant documents.

The first listener listens for the creation and modification of a document and then creates or updates the document's thumbnails. The second listens for the deletion of a document and then deletes the thumbnails associated with it.
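In simplified, self-contained form, the division of labor between the two listeners can be sketched as follows (the real plugin hooks into Liferay's model-listener API; the class, method, and map names here are purely illustrative):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: one listener reacts to creation/modification,
// the other to deletion, keeping the thumbnail store in sync.
public class ListenerSketch {

    // Stand-in for the thumbnails kept in Liferay's storage system.
    static Map<String, String> thumbnails = new HashMap<>();

    // First listener: create or update the document's thumbnails.
    static void onCreateOrUpdate(String documentId) {
        thumbnails.put(documentId, "thumbnails-of-" + documentId);
    }

    // Second listener: delete the thumbnails associated with the document.
    static void onDelete(String documentId) {
        thumbnails.remove(documentId);
    }

    public static void main(String[] args) {
        onCreateOrUpdate("doc-1");
        System.out.println(thumbnails.containsKey("doc-1")); // true
        onDelete("doc-1");
        System.out.println(thumbnails.containsKey("doc-1")); // false
    }
}
```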

The Servlet Filter
The Servlet Filter intercepts every request for a document of the application and performs a series of validations before returning a thumbnail in response. It first analyzes the parameters of the query to determine whether a thumbnail is requested. Next, the filter verifies that the thumbnail actually exists before finally returning it to the author of the request. If either of these checks fails, the filter ignores the query and lets it follow its normal course, i.e. returning the original document requested.
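The first of those checks, extracting the requested thumbnail name from the query parameters, can be illustrated with a small self-contained helper (hypothetical code, not the plugin's actual filter):

```java
// Hypothetical sketch of the filter's first check: does the query string
// ask for a thumbnail, and if so, which one?
public class ThumbnailFilterSketch {

    // Returns the requested thumbnail name, or null when the request
    // should follow its normal course (original document returned).
    static String requestedThumbnail(String queryString) {
        if (queryString == null) {
            return null;
        }
        for (String param : queryString.split("&")) {
            if (param.startsWith("thumb=") && param.length() > "thumb=".length()) {
                return param.substring("thumb=".length());
            }
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(requestedThumbnail("version=1.0&thumb=img_480")); // img_480
        System.out.println(requestedThumbnail("version=1.0"));               // null
    }
}
```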

The ThumbnailService
Lastly, the ThumbnailService handles the creation and deletion of the thumbnails and organizes them in Liferay's storage system, according to the plugin's configuration.

II. Using the Plugin

Using the Thumbnail Generator entails two things: configuring the plugin and retrieving the thumbnails.

Configuration
The Thumbnail Generator’s configuration page (Menu => Control Panel => Configuration => System Settings => Thumbnail Configuration) allows you to define two options:

  • The formats of the files that will be processed by the plugin.
    For example, to restrict the creation of thumbnails to JPG and PNG files, simply add these formats to the configuration and all other files will be ignored by the plugin.
  • The command lines that will generate the thumbnails.
    To define a thumbnail and generate it, add a line to the configuration with the following syntax: ‘name:command‘. The name will later provide access to the thumbnail; the command corresponds to the command line that will generate it (see ImageMagick’s documentation to explore all possible options). For example: ‘img_480:convert ${inputFile} -resize 480x270 ${outputFile}‘ will generate a thumbnail of 480x270 pixels that will be retrievable through its name “img_480”.

Thumbnail Generator configuration page

In the above screenshot, three different thumbnails will be created for each JPG and PNG file uploaded to the application.

The plugin’s configuration not only lets the user control the number of thumbnails to be generated, but also the way in which they are created. In this scenario, the convert command comes from the powerful image editing library ImageMagick. Instead of this command, we could have used any other command executable on the machine hosting the Liferay application.
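To make the ‘name:command‘ syntax concrete, here is a minimal sketch of how such a configuration line can be split and its ${inputFile}/${outputFile} placeholders substituted before the command is run (an illustrative helper, not the plugin's source code):

```java
// Illustrative parsing of a 'name:command' configuration line.
public class ThumbnailConfigSketch {

    // Splits the line on the first ':' and substitutes the placeholders.
    static String buildCommand(String configLine, String inputFile, String outputFile) {
        int separator = configLine.indexOf(':');
        String command = configLine.substring(separator + 1);
        return command
            .replace("${inputFile}", inputFile)
            .replace("${outputFile}", outputFile);
    }

    public static void main(String[] args) {
        String line = "img_480:convert ${inputFile} -resize 480x270 ${outputFile}";
        System.out.println(buildCommand(line, "test.jpg", "test_480.jpg"));
        // convert test.jpg -resize 480x270 test_480.jpg
    }
}
```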

Thumbnail Retrieval
Once the plugin is deployed and configured, it is ready for use. Thumbnails will be generated automatically each time a document is uploaded into your application. To retrieve a thumbnail of a document, you just have to add the parameter “thumb={thumbnailName}” to the URL of that document.

An Example of Thumbnail Retrieval Process

  • The URL of a document (test.jpg) on a local instance of Liferay looks like this: http://localhost:8080/documents/20147/0/test.jpg/0d72d709-3e48-24b3-3fe6-e39a3c528725?version=1.0&t=1494431839298&imagePreview=1
  • The URL of the thumbnail named img_480 associated with this document can be called this way: http://localhost:8080/documents/20147/0/test.jpg/0d72d709-3e48-24b3-3fe6-e39a3c528725?version=1.0&t=1494431839298&imagePreview=1&thumb=img_480
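Appending the parameter can of course be done by hand, but a small helper makes the rule explicit (illustrative code, not part of the plugin):

```java
// Illustrative helper: append the 'thumb' parameter to a document URL,
// using '?' or '&' depending on whether a query string already exists.
public class ThumbnailUrlSketch {

    static String thumbnailUrl(String documentUrl, String thumbnailName) {
        String separator = documentUrl.contains("?") ? "&" : "?";
        return documentUrl + separator + "thumb=" + thumbnailName;
    }

    public static void main(String[] args) {
        System.out.println(thumbnailUrl(
            "http://localhost:8080/documents/test.jpg?version=1.0", "img_480"));
        // http://localhost:8080/documents/test.jpg?version=1.0&thumb=img_480
    }
}
```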

III. Administration

To give the user more control over the management of this module, an administration page (your site > Configuration > Thumbnails administration) has been created, allowing you to perform several actions on the thumbnails:

  • Re-generate all the thumbnails
  • Delete all the thumbnails
  • Delete orphaned thumbnails (thumbnails that are no longer linked to any document but are still present due to a change in the configuration)

Thumbnail Generator administration

In conclusion, this brief tutorial introduced Thumbnail Generator, a utility app for Liferay, and described how to use and configure the plugin, retrieve the thumbnails, and administer it. Should you have any further questions or comments, please contact us.

The Future of Open Source Software in Broadcasting Industry: SMPTE BootCamp 2017

                           

Savoir-faire Linux participated in the Society of Motion Picture and Television Engineers (SMPTE) BootCamp 2017, whose overarching topic was Media in the IP Era. The bootcamp was organized by the Montreal SMPTE Committee, including its main actor, CBC/Radio-Canada, and was held at l’École de technologie supérieure (ÉTS) on June 12-13, 2017.

Michel Proulx addressing the audience at SMPTE BootCamp 2017 in Montreal


The Event’s Focus and Our Role

The SMPTE, including its Montreal/Quebec chapter, has three key goals: educating players in the media and broadcasting industry, communicating the latest technological developments, and encouraging networking and interaction among industry stakeholders. This year, the SMPTE BootCamp 2017 rallied participants around the following topics:

a) IP transport and the SMPTE 2110 Standards,
b) Virtualization and software.

On this occasion, our open source software (OSS) consultants Éloi Bail and Amir Taherizadeh jointly delivered a talk entitled Open Source Software: A Tool for Digital Transformation in the Broadcasting Industry. The first part revealed the empirical results of our joint R&D project with Radio-Canada on how to handle IP content in the cloud. This included deploying the FFmpeg OSS technology on a general-purpose server in order to transmit raw data at a speed of 3.5 Gbps without relying on specialized broadcasting hardware. In addition, Éloi demonstrated the actual data transmission and its performance on stage, in real time, with the help of two generic servers and a switch. This showcased for the participants the technical implications and potential of FFmpeg in the broadcasting industry for the years to come.

Amir Taherizadeh and Éloi Bail presenting on stage.

The second part explored the nature, inherent attributes, myths, advantages, challenges, and licensing opportunities associated with OSS. It presented OSS as a relevant, significant, and ubiquitous tool in a variety of industries including, but not limited to, aerospace as well as the media, entertainment, and broadcasting industries. The aerospace industry presents an interesting case, as it is comparable to the broadcasting industry on three dimensions: it is a rather closed and highly standards-governed industry; it is capital intensive and advances rapidly; and there is a complex, symbiotic interrelationship between its hardware and software components. Amir presented an example in which the networking stack of the Linux kernel drives the multimedia equipment of an aircraft. This case demonstrates how value-added solutions can be created by adopting an open and collaborative value-creation process. Indeed, OSS projects like the Linux kernel and FFmpeg are testimonies to collaborative software development, where private companies and communities work together towards a common objective.

From left to right: Éloi Bail, Daniel Guevin (Radio-Canada), David Beaulieu (Radio-Canada), Amir Taherizadeh, Ash Charles, and Francois Legrand (Radio-Canada)

Overall, we really enjoyed being part of this event, as it highlighted the opportunity for software transformation in the broadcasting industry, including the use of open source software such as FFmpeg and GStreamer.

Ring Stable Version Released: Ring 1.0 Liberté, Égalité, Fraternité


On July 21, 2017, Savoir-faire Linux released the stable version of Ring: Ring 1.0 – Liberté, Égalité, Fraternité. Ring is a free/libre and universal communication platform that preserves its users’ privacy and freedoms. It is a GNU package that runs on multiple platforms and can be used for texting, calls, and video chats more privately, more securely, and more reliably.


About Ring

Ring is a fully distributed system based on OpenDHT technology and the Ethereum blockchain. This means it does not need any central authority, enterprise, or even a server to function. Therefore, it avoids keeping centralized registries of users and storing their personal data. In addition, Ring is based on standard security protocols and end-to-end encryption, which prevents the decryption of communications over the network and consequently offers a high level of privacy and confidentiality.

Key Functionalities and Features

– Encrypted audio, HD video, and instant-messaging communications (ICE, SIP, TLS)
– Screen Sharing and Conferencing (Win32 and GNU/Linux)
– Support of Ethereum Blockchain as Distributed Public Users’ Database
– Distributed Communication Platform (OpenDHT)
– Platform Support on GNU/Linux, Windows UWP (Windows 10 and Surface), macOS (10.10+) and Android (4.0+)
– Distributed under GPLv3+ License
– Parts of Ring can be used as a building block in any Internet of Things (IoT) project

Ring: An Impactful and Inspirational Social Innovation

Ring is based on state-of-the-art technologies such as OpenDHT and follows strict ethical guidelines. Together, this mix of free software technologies and ethical rules offers end users leading-edge privacy and anonymity, confidentiality, and security of conversations. In addition, its stable connectivity and innovative standard functionalities across a multitude of platforms make it a suitable choice for everyday communication.

Important Links
> Download Ring
> Contribute to Ring
> Technical documentation

How to Become a Contributor to WordPress? Baby Steps… To Big Dreams!

Are you passionate about the Web and the Free Software Movement? Are you looking for an opportunity to play your part? Perhaps you’ve considered contributing to the WordPress project? This tutorial is for you! Our free software developers have contributed to the core of the application, and now they’d love to introduce you to the development process of the most popular online publishing platform in the world. Follow these steps to get started!

Subscribe
You can begin your journey by subscribing to the various platforms used by contributors. This helps you establish a connection with the community and stay in touch.

WordPress
Subscribing to WordPress is a must. WordPress.org serves, among other things, to document procedures and publish information related to its development platform. This will be your main reference, so subscribe and log in.

Once you are connected, we invite you to consult the section of the website concerning Development and to get to know the different teams and their missions. This is also a great opportunity to subscribe to various newsletters if you wish to follow a particular stream of development (marketing, modules, themes, documentation, etc.)

Slack
People discuss their contributions via the Slack collaborative communication platform. They hold meetings, disseminate information, and help users reach their contribution objectives. On Slack you will find all the influential developers in the WordPress community; it is the ideal place to ask them questions!

Please read the subscription documentation carefully, as the procedure can sometimes be confusing. If you are having trouble signing in with your WordPress.org account, visit the subscription page and the updated instructions will show up. You can then sign in using this link: https://wordpress.slack.com/.

Trac
Your last step before contributing! Check out the Trac ticket manager. Every change in WordPress is documented here. The main developers use this tool to approve and integrate changes to the core. To ensure effective, accurate and coordinated development, using documentation is mandatory. Now we can get started with developing for WordPress…

Following the Best Practices
Let the fun begin! Before writing code, you will need to integrate the project’s best practices and development standards. Some documents will be more useful to you than others. We suggest that you focus on these sections: Core Contributor Handbook, Best Practices, and Writing Patches. For PHP developers, you will also be interested in the PHP Coding Standards and Core API documentation.

The Environment
The majority of developers use Varying Vagrant Vagrants (VVV), which runs under all operating systems. VVV is an open source Vagrant configuration focused on WordPress development. It is mostly used to develop themes, modules, and plugins, as well as for contributions to WordPress core. Installing optional Vagrant modules can be a bit complex, so if you are using a Linux environment, make sure you have the “build-essential” and “libc6-dev” packages before you get started. Feel free to work with other tools as well; VVV is not your only choice. But if you choose another tool, please do not forget to report your developments on WordPress’s core code repository, to track the testing and progress of your contributions!

Here is an example of installing a development environment using VagrantPress and Git on Ubuntu.

git clone https://github.com/vagrantpress/vagrantpress.git
cd vagrantpress
vagrant up
rm -fr tests
git clone git://develop.git.wordpress.org/ tests
vagrant provision

SVN and Git
You have probably noticed that the code repository uses SVN. If you wish to contribute to the core, we strongly recommend using it. But there is no obligation; it is also possible to work through Git. You will find the documentation you need for these two version control systems in the following resources: How to Use Subversion (for developers, plugins, and the codex) for SVN, and Using Git for the second.

CSS and Javascript
WordPress compresses some resources. To be able to work on them, you must disable this function in wp-config.php by adding “define(‘SCRIPT_DEBUG’, true);”.

Code Validation
WordPress code standards most likely differ from those you are used to. A code format checker can be a great help: use PHP_CodeSniffer with the WordPress Coding Standards. You can also read the WordPress style guidelines for detailed installation instructions.

Test-based Contribution
Did you know that you do not have to be a seasoned developer to contribute to WordPress? Testing, for example, is a good way to participate in development while learning. Trac lists the corrections to be tested. If you are just starting out, work first on non-urgent corrections.

Baby Steps… To Big Dreams!
Yes, contributing to a free software project is indeed a huge investment of your time: reading, setting things up, configuring, downloading, and so on.

However, once you’ve passed the first steps and made your first few contributions, you will officially be a free software contributor! Now take this chance to make your first baby steps and realize your big dream of becoming a free software developer!