GerritForge helps Gerrit 3.0 stability

Gerrit 3.0 plan announced: we need stabilisation now

The Gerrit 3.0 plan and its NoteDB reviews have been officially announced at the Gerrit User Summit 2015. NoteDB is already available as an experimental feature in the current Gerrit master, but it needs much more stability in order to be officially supported for production.
GerritForge decided to help by reusing its existing continuous integration system to validate every Gerrit patch set against both the current and the new NoteDB review persistence back-ends, in order to avoid regressions during the 2.13 and 3.0 development.

Pre-commit validation by GerritForge CI

If you posted a patch to gerrit-review.googlesource.com in November, you have hopefully received a Verified+1 from a strange user with a Diffy logo on the side.
The CI provided by GerritForge on gerrit-ci.gerritforge.com automatically fetches every patch set pushed to gerrit-review.googlesource.com and triggers a slightly modified Gerrit build, with the purpose of checking whether the code change introduces a regression or not. At first sight this may seem a quite normal Gerrit-to-Jenkins job integration; however, implementing it on top of Google’s multi-master replicated installation was not a piece of cake.

Gerrit Trigger plugin limitations on multi-master setups

Jenkins already has an out-of-the-box integration with Gerrit, provided by the Gerrit Trigger plugin maintained by Robert Sandell (CloudBees). It leverages the Gerrit stream events through an SSH channel and makes use of the Gerrit REST API to action them according to the build result.
Google’s Gerrit setup, however, is not a trivial one-node installation and is further limited by the security constraints of the Google infrastructure, which does not allow any incoming SSH connectivity.
Additionally, the whole concept of “getting the events in a stream” doesn’t work when events can arrive concurrently from multiple places at the same time: who is going to define the “global ordering”, and how do you put all those events into a single TCP/IP socket? Even UDP would not work in this case, because an SSH channel requires confidentiality between two, and only two, peers.

Alternatives to SSH

During the hackathon, other approaches were discussed by Shawn Pearce, including the use of HTTP WebSockets (or CometD) for fetching events without the need for an SSH connection. Events are still distributed and generated by multiple masters all the time, and the Jenkins plugin would then have the onus of contacting all the Gerrit servers and keeping a connection open to each of them. This is clearly not going to work, because the number of servers, their IPs and their locations may change at any time, and the solution would eventually be in danger of losing precious events.

Back to polling

The only solution we envisaged was to fall back to a polling logic where, every 10 minutes, Jenkins asks Gerrit “what’s new since the last time we spoke?”. This goes against the main reason the Gerrit Trigger plugin was designed: avoiding SCM polling. It is, however, a much better and more optimised polling strategy; let’s see why.

Query and then fetch

The typical Git SCM polling relies on fetching all references at every poll interval and detecting whether new Git commits are available. This is notably slow and generates a huge overhead on the Git server. The approach we took is quite different and makes use of the Gerrit search capabilities, which are way faster and more powerful than a simple Git fetch.
Jenkins first asks Gerrit for the list of changes and associated commit-IDs involved in any event since the last polling time: the result may include patch sets that have already been built, in order to avoid having any gaps between polling intervals. The search is fast and implemented in … you know, Google is a search company, isn’t it?
Once the list of candidate commit-ids is identified, Jenkins goes through all of them and checks using the Gerrit REST-API:
– has it been built during my previous execution?
– has it already been accepted (or rejected) by me?
The commit-IDs that result as not having been checked before and not yet validated are then used to trigger a specific job parametrised on:
– Specific branch
– Specific change ref-spec
Fetching is performed avoiding any wildcards, and the corresponding load on the Git server is minimal. Fetch (Git protocol) + build (using Buck) + test (unit + integration) + review feedback (REST API) takes an average of 5 minutes, which is an amazing result if you consider the size of the Gerrit project and the typical slow speed of a default Jenkins Git fetch.
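
As a rough sketch of the three steps (with hypothetical change and patch-set numbers, and without the bookkeeping the real job keeps between executions), the whole cycle boils down to a Gerrit REST query, a per-change status check and a targeted Git fetch:

# 1. Query: ask Gerrit which changes were updated in the last polling interval
curl -s "https://gerrit-review.googlesource.com/changes/?q=project:gerrit+-age:10m&o=CURRENT_REVISION"

# 2. Check: read the labels of each candidate change to see if it was already validated
curl -s "https://gerrit-review.googlesource.com/changes/12345/detail"

# 3. Fetch: pull only the specific patch-set ref, no wildcard refspecs involved
git fetch https://gerrit-review.googlesource.com/gerrit refs/changes/45/12345/2
git checkout FETCH_HEAD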

The bottom line

Using the query + fetch approach, which seemed a bit slow and old-fashioned at the beginning, eventually proved very simple and successful. Instead of setting up SSH host key verification, key exchange and ad-hoc channels, the only configuration needed is a valid Gerrit user and the HTTPS endpoint URL, the same one used for cloning the code.
The solution is also much more reliable, as SSH channels are notably unstable and consume server threads. The only drawback is the slight delay between the patch-set upload and the start of the build (at most 10 minutes), which is acceptable in most cases.

Results

Since its roll-out, more than 1,200 patches have been checked and rated, a lot of potential Gerrit regressions have been avoided and, more importantly, we have prevented the NoteDB code from diverging in stability from the current mainstream development.

How can I re-trigger validation for a single change?

We have enabled anyone to trigger ad-hoc executions of the Gerrit validation flow using the following URL:
https://gerrit-ci.gerritforge.com/job/Gerrit-verifier-flow/build
This is a standard Jenkins parametrised build that requests the Change-ID to be built, as either a SHA1 or a number. Once the job is triggered, the build will be executed and the validation feedback applied to your change, regardless of the previous build or validation status.
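
The same flow can also be triggered from the command line with a plain HTTP POST to the standard Jenkins parameterised-build endpoint; in this sketch the parameter name CHANGE is only a placeholder for whatever the job actually expects, and your Jenkins security settings may additionally require authentication:

curl -X POST "https://gerrit-ci.gerritforge.com/job/Gerrit-verifier-flow/buildWithParameters" \
     --data "CHANGE=12345"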

GerritHub user-controlled GitHub Scopes

Nowadays people are very careful about privacy and user data: nobody grants access to their profile without first checking the possible consequences.
We want to give users the ability to always know and control what level of access is given to their data: that’s why we improved the way you log in to GerritHub.io.

GitHub scopes: what are they?

GitHub provides authentication and access to the user’s profile using a protocol called OAuth 2.0. When GerritHub requests a user to authenticate, it is granted a set of permissions to operate on behalf of the user on their GitHub resources, which include:

  • User’s personal data (name, e-mails)
  • User’s membership to organisations and teams
  • User’s repositories

The set of permissions to access and operate on your data is also known as “Scope” in GitHub terms.
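
As a side note, GitHub reports the scopes attached to an OAuth token in the response headers of every API call, so you can always inspect what an application has actually been granted (the token below is obviously a placeholder):

curl -si -H "Authorization: token YOUR_OAUTH_TOKEN" https://api.github.com/user | grep -i '^x-oauth-scopes'
# e.g.  X-OAuth-Scopes: repo, user:email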

How is GerritHub helping me to control my access?

As of today, GerritHub has a new “Scope Selection” screen with two main objectives:

  1. Displaying your current scope and the associated rights GerritHub has on your GitHub profile
  2. Giving you the ability to switch to a different “Scope” and consequently change the rights that GerritHub has on your profile data

Transparency is good, but what is the practical added value?

In the past there has been a common complaint about GerritHub having either too much or too little access to your GitHub profile:

  • Too much? Why does GerritHub.io need access to my e-mail address? Why does GerritHub need to see my public keys?
  • Too little? Why does GerritHub not show my private repositories in the import screen? How can I see my organisation membership in the GerritHub project security screen?

With the ability now to visualise and change the current “Scope”, people can be more aware of why things are not showing up, and they can make conscious decisions about how to change them, with full transparency on the associated implications.

A common scenario: importing and accessing private GitHub Organisations, Teams and Repositories

When you need to import an existing private GitHub project, you need to access information that is not publicly available:

  • Your membership to a private organisation
  • Your ownership of a Team structure
  • Ability to clone and push your private organisations’ repositories

There is now a special information box suggesting that you have the ability to change your “Scope” if you don’t see the organisations and repositories you want to import.

After changing the scope, you can then log in again and you will have an improved set of options to get more data and repositories from your GitHub account.

Like it? Will you use it on a daily basis?

We are eager to get your feedback on this new feature: Tell us what you think and let us know what you would change or add to the set of “Scope” permissions.

Gerrit Code Review and Jenkins Continuous Delivery Pipeline on BigData

Gerrit at the Jenkins User Conference 2015 – London

For the very first time, CloudBees organised a full User Conference in London, and we have been very pleased to present a real-life case study of Continuous Integration and Continuous Delivery applied to a large-scale BigData project.

See below a summary of the overall presentation, which was also published as a YouTube video.

The trap of the BigData production phase

BigData has historically been used by data scientists to analyse data and extract features that are relevant for the business. This has typically been a very interactive process, happening mostly in “notebook-style” environments where almost everything, from ad-hoc queries to graphs, could be edited and executed interactively. This early stage of the process is typically known as the “exploration” or “prototype analysis” phase. Sometimes it lasts only a few days, but often it becomes the day-by-day modus operandi.

However, when the exploration phase is over, projects need to be rewritten or adapted using a programming language (Scala, Python or Java), with transformations and aggregations expressed as jobs. During this “production-isation” phase the code needs to be properly written and tested to be suitable for production.

Many projects fall into the trap of reducing the “production phase” to a mere translation of notebooks (or spreadsheets) into Scala, Java or Python code, relying only on the manual analysis of the resulting data as the sole testing methodology. The lack of software engineering practices generates complex monolithic code, difficult to maintain, to understand and thus to validate: the agility of the initial “exploration” phase is then miserably lost in the translation into production code.

Why Continuous Delivery on BigData?

We have approached the development of BigData projects in a radically different way: instead of simply relying on existing tools, which are often not enough for setting up a proper Agile Delivery Pipeline, we introduced brand-new frameworks and applied them to the building blocks of a Continuous Delivery pipeline.

This is how Stefano Galarraga started the ScaldingUnit project, aimed at decomposing the development of complex Scalding MapReduce jobs into simple and testable units.

We then started to benefit from the improved agility and speed of delivery, giving constant feedback to data scientists and delivering constant value to the business stakeholders during the production phase. The talk presented at the Jenkins User Conference 2015 is a smaller-scale showcase of the pipeline we created for our large clients.

Continuous Delivery Pipeline Building Blocks

In order to build a robust continuous delivery pipeline, we first need a robust code base to start with: it seems a bit obvious, but it is often forgotten. The only way to create a stable code base, collectively developed and shared across different [distributed] teams, is to adopt a robust code review lifecycle.

Gerrit Code Review is the most robust and scalable collaboration system that allows distributed teams to submit their changes and provide valuable feedback about the building blocks of the BigData solution. Data scientists can participate as well during the early stage of the production code development, giving suggestions and insight on the solution whilst it is still in progress.

Docker provided the pipeline with the ability to define a set of “standard disposable systems” to host the real-life components of the target runtime, from Oracle to a BigData CDH Cluster.

Jenkins Continuous Integration is the glue that allowed us to coordinate all the different actors of the pipeline, activating the builds based on the stream events received from Gerrit Code Review and orchestrating the activation of the integration test environments on Docker.

Mesos and Marathon managed all the physical resources to allow a balanced allocation of all the Docker containers across the cluster. Everything has been managed through Mesos / Marathon, including the Gerrit and Jenkins services.

Pipeline flow – Pushing a new change to Gerrit Code Review

The BigData pipeline starts when a new piece of code is changed on the local development environment. Typically developers test local changes using the IDE and the Hadoop “local mode” which allows the local machine to “simulate” the behaviour of the runtime cluster.

Local mode testing is typically good enough for running unit tests, but it is often unable to detect problems (e.g. non-serialisable objects, compression, performance) that are likely to appear only in the target BigData cluster. Allowing a code change to be pushed to a target branch without having been tested on a real cluster represents a potential risk of breaking the continuous delivery pipeline.

Gerrit Code Review allows the change to be committed and pushed to the Server repository and built on Jenkins Continuous Integration before the code is actually merged into the master branch (pre-commit validation).

Pipeline flow – Build and Unit-tests execution

Jenkins uses the Gerrit Trigger Plugin to fetch the code currently under review (which is not on master but on an open change) and triggers the standard Scala SBT build. This phase is typically very fast, taking only a few seconds to complete, and provides the first validation feedback to Gerrit Code Review (Verified +1).
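
In practice this boils down to fetching the patch-set ref of the open change and running the standard build against it; a minimal sketch with hypothetical change and patch-set numbers:

git fetch origin refs/changes/67/4567/1   # the open change under review
git checkout FETCH_HEAD
sbt clean test                            # compile + unit-tests only, for fast feedback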

Until now we haven’t done anything special or different from a normal git-flow based continuous integration: we pushed our code and we got it validated in Jenkins before merging it to master. You could actually implement the pipeline up to this point using GitHub Pull Requests or similar.

Pipeline flow – Integration test automation with a real BigData Cloudera CDH Cluster

Instead of considering the change “good enough” after a unit-test validation phase and then automatically merging it, we wanted to go through a further validation on a real cluster. We have completely automated the provisioning of a fully featured Cloudera CDH BigData cluster for running our change under review with the real Hadoop components.

In a typical pipeline, integration tests on a BigData cluster are executed *after* the code is merged, mainly because of the intrinsic latencies associated with the provisioning of a proper, reproducible integration environment. How then can we speed up the integration phase without necessarily blocking the development of new features?

We introduced Docker with Mesos / Marathon to have a much more flexible and intelligent management of the virtual resources: without having to virtualise the hardware, we were able to spawn new Docker instances in seconds instead of minutes! Additionally, the provisioning was coordinated by the Docker Build Step Jenkins plugin to allow the orchestration of the integration test execution and the feedback to Gerrit Code Review.

Whenever an integration test phase succeeded or failed, Jenkins would then submit an “Integrated +1/-1” feedback to the original Gerrit Code Review change that triggered the test.
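
The feedback itself is a single call to the Gerrit REST API; a hedged sketch, assuming a custom “Integrated” label has been configured on the project and using placeholder server, change and revision identifiers:

curl -s -u jenkins:SECRET -X POST \
     -H "Content-Type: application/json" \
     -d '{"message": "Integration tests passed on the CDH cluster", "labels": {"Integrated": 1}}' \
     "https://gerrit.example.com/a/changes/4567/revisions/1/review"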

Pipeline flow – Change submission and release

When a change has received the Verified+1 (build + unit tests successful) and the Integrated+1 (integration tests successful), it is definitely ready to be reviewed and submitted to the master branch. The additional commit then triggers the final release build that tags the code and uploads it to Nexus, ready to be elected for production.

Pipeline flow – Rollout to production

The decision to roll out a new change to production is typically enabled by a continuous delivery pipeline but manually operated by the business stakeholders. Even though we could *potentially* roll out every change, we did not *necessarily* want to do that, because of the associated business implications.

Our approach was then to publish to Nexus all the potential *candidates* for production and roll them out to a pre-production environment, ready to be assessed by data scientists and the business in real time. The daily job scheduler had a configuration parameter that simply allowed “pointing” to the version of the code to run every day. In this way, whatever is deployed to Nexus is potentially fully working in production, and rolling out or rolling back a release is just a matter of changing a label in the daily job scheduler.

Summary

Building a Continuous Delivery Pipeline for BigData has been a lot of fun and improved the agility of the Business in rolling out changes more quickly without having to compromise on features or stability.

When using a traditional Continuous Integration pipeline, the different stages (build + unit tests, integration tests, system tests, rollout) all happen on the target branch, causing it to be amber or red at times: whenever tests are failing, the pipeline needs to be restarted from the beginning and people are blocked.

By adopting a Code Review-driven Continuous Integration pipeline we managed to get the best of both worlds, avoiding feature branches but still keeping the ability to validate the code at each stage of the pipeline and report it back to the original change and the associated developer, without compromising the stability of the target branch or introducing artificial and distracting feature branches.

Resources

The slides of the talk are published on SlideShare.

All the Docker images used during the presentation are available on GitHub.

Gerrit Ver. 2.11 is now available for download

Gerrit version 2.11 is now available.

Release highlights:

  • Inline editing in the browser
  • Removal of the old change screen
  • Many improvements in the change screen UI

Release notes:
https://gerrit-documentation.storage.googleapis.com/ReleaseNotes/ReleaseNotes-2.11.html

Download:
https://gerrit-releases.storage.googleapis.com/gerrit-2.11.war

Documentation:
https://gerrit-documentation.storage.googleapis.com/Documentation/2.11/index.html

Native packages:
If you have already installed Gerrit 2.10 using the native packages, you need to execute:

(on Debian / Ubuntu)

apt-get update && apt-get install gerrit

(on CentOS / RedHat)

yum clean all && yum install gerrit

If you have not yet configured the GerritForge / BinTray repositories, please follow the instructions at:
https://gitenterprise.me/2015/02/27/gerrit-2-10-rpm-and-debian-packages-available/

Yet again, Gerrit delivers new features and cutting-edge technology improvements for Git and Code Review for the Enterprise!

GitMinutes #30: Luca Milanesio on Gerrit Code Review

Many thanks to Thomas Ferris Nicolaisen for inviting me to talk about Gerrit Code Review at GitMinutes.

It has been a very interesting discussion on the benefits of Code Review and how Gerrit can help small and large companies embrace it.

The interview is available on-line at http://episodes.gitminutes.com/2014/07/gitminutes-30-luca-milanesio-on-gerrit.html; alternatively, you can download and listen to the 1h 27′ conversation as a podcast at https://itunes.apple.com/de/podcast/gitminutes-podcasts/id637843725?l=en.

Use the force Luca!

We started (of course!) talking about the [in]famous force push of 186 Jenkins repositories to GitHub; I was in the Hacker News Top-10 for over 7 hours… so I was expecting the question to pop up during the interview 🙂

My friend Alex Blewitt also took the opportunity to forge a Star Wars-like headline for his InfoQ article on what happened.

Git adoption in the Enterprise, where it all began

We moved the discussion to the foundation of my business on Git and Code Review, and the reasons and challenges that an Enterprise company faces when moving to Git. We went through the history of how LMIT started GitEnterprise.com and then focused on Gerrit Code Review based products and services for large Enterprises world-wide: a niche and successful business nowadays.

GitHub or Gerrit? or both with GerritHub?

As I expected, we ended up comparing GitHub and Gerrit, analysing the similarities and differences between the two. This topic has also been presented at two conferences, the Gerrit User Summit @GooglePlex – Mountain View CA and the 33rd Degree.org Java Developers Conference in Krakow; slides are available at http://www.slideshare.net/lucamilanesio/gerrit-codereviewgit-hubplugin.

Gerrit has historically been considered “more difficult” than GitHub: true in the past, but not anymore today, apart from the Web user-experience CSS styling, which is much nicer and more pleasant on GitHub. The availability of http://gerrithub.io has allowed over 1,800 developers since October 2013 to get started with Gerrit in less than 5 minutes by watching an introductory Gerrit YouTube video: using it was then just 3 clicks away, with no installation or configuration needed! The availability of an easy and accessible public Cloud instance represents a big improvement in the accessibility and usability of Gerrit.

For which teams is Gerrit the right choice?

We talked about the “typical learning curve” of people coming from previous version control systems, such as Subversion. Does it make sense to get started with Git and Gerrit at the same time? When is Gerrit needed and when is it going to provide most of its value?

I’ve covered the topic in past webinars and talks: hands-on webinar recordings are freely available on-line at:

The size of the project (in terms of number of people x number of repositories) is typically one of the key factors in Code Review adoption. Gerrit, however, can also be used as a standalone OpenSource Git server, even without leveraging its Code Review capabilities: this makes the choice of Gerrit a good first step towards a smoother Git adoption.

What are Gerrit Topics about?

We went through a very interesting discussion about “Gerrit topics”, a feature that is not new to Gerrit but is sometimes forgotten despite its importance and relevance for medium-large teams.

With the forthcoming support for multi-repository atomic commits in Gerrit, it will be possible to merge multiple changes on multiple repositories at the same time for a single topic. This feature is not ready yet but is hopefully coming in the near future, and the Google Gerrit team developers and contributors are working on it.

The ability to make an atomic commit across multiple repositories will also allow a more consistent Jenkins build process, with fewer broken builds caused by interdependent changes on multiple components.
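
A topic is simply set at push time and is used to group related changes across repositories; a minimal example with the “branch/topic” push syntax available at the time of writing (repository names and topic are hypothetical):

cd core-repo  && git push origin HEAD:refs/for/master/fix-login-api
cd ../ui-repo && git push origin HEAD:refs/for/master/fix-login-api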

Who is using Gerrit today?

We talked about the adoption of Gerrit in the community, which is growing year after year. A lot of medium-sized companies have adopted Gerrit in the past, including Spotify, which uses it side-by-side with GitHub.

The ability to “submit a change” to any project without the risk of breaking the build is definitely an incentive to encourage even more people to contribute, share knowledge and improve the code base, without the risk of breaking anything or forking the code. This is one of the reasons that drove large OpenSource organisations such as the Eclipse Foundation and OpenStack to adopt Gerrit Code Review in their tools platform.

How to embrace Code Review in a Team or Company?

We went through an interesting comparison / discussion of Agile methodology vs. Code Review. Teams often misunderstand and confuse the concept of “review” with “pair programming”: the problem is well analysed in my book “Learning Gerrit Code Review” (available on Amazon.com at http://www.amazon.com/Learning-Gerrit-Code-Review-Milanesio/dp/1783289473). I defined pair programming as a dot in a time/people space: two developers writing a piece of code at the same time. This, however, does not exclude all the other points in the time/people space where multiple people at different times will read the code and provide their feedback: pair programming is then a “specific example” of the “code review space”.

Because of the different perspectives (pair programming is a dot whilst code review is a “cloud of dots” in the time/people space), they are not mutually exclusive: they are equally important, and both enable effective collective code ownership and knowledge sharing.

References and greetings

It has been a very long but interesting discussion with Thomas, and I hope you’ll enjoy it.

See below the links to the resources we mentioned during the interview:

Thanks again to Thomas for his fantastic initiative: GitMinutes PodCast!

Luca Milanesio 

Heartbleed: GitEnterprise and GerritHub are safe

A few days ago a large part of the World Wide Web was found to be vulnerable to the heartbleed bug in OpenSSL.

What is the vulnerability about?

The vulnerability is effectively a bug in all the versions of OpenSSL from 1.0.1 to 1.0.2. In reality, a lot of web sites are either using the older and still popular OpenSSL 0.9.8, or they have already upgraded to the latest patched version of OpenSSL, and thus are NOT vulnerable to heartbleed.
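
If you want to check your own servers, the installed OpenSSL version is a first indicator (although not a conclusive one, since distributions often back-port the fix without changing the version number):

openssl version
# e.g. "OpenSSL 1.0.1f ..." falls inside the affected range unless your distribution has patched it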

 

Are your passwords safe?

In a nutshell, yes, when they are posted to or exchanged with a server that is not vulnerable to this attack:

  • GitEnterprise (gitent-scm.com) has never used any OpenSSL 1.0.1-1.0.2 (see: https://www.ssllabs.com/ssltest/analyze.html?d=gitent%2dscm.com) and thus is not vulnerable: you can keep your existing passwords as they are safe.
  • GerritHub (gerrithub.io) has been vulnerable for only 5 days and has since been upgraded (see https://www.ssllabs.com/ssltest/analyze.html?d=gerrithub.io). However, GerritHub DOES NOT exchange passwords over the Internet but relies on your existing GitHub session through OAuth token authentication. This means that during the 5 days of vulnerability your account has NOT been at risk on GerritHub.

What about GitHub?

Unfortunately GitHub has been vulnerable (see https://github.com/blog/1818-security-heartbleed-vulnerability), but the problem has been resolved or is being resolved right now as the nodes get upgraded.

We therefore recommend changing your GitHub password, in order to be sure that any credentials potentially stolen earlier would not impact the security of your account and repositories.

GerritHub relies on GitHub OAuth, so is GerritHub at risk as well?

In real terms the answer is “potentially yes”: if an attacker had stolen your GitHub password, they could have initiated a login on your behalf and then accessed GerritHub as well.

How can I strengthen my GitHub security?

GitHub already supports two-factor authentication today (see https://help.github.com/articles/about-two-factor-authentication): if you have this extra security enabled, nobody other than you can ever access your account, even if someone could have potentially stolen your password.

Can I have a GerritHub account secured independently from GitHub?

Not yet; however, we are working on an advanced security option for private GerritHub accounts. We will offer, for an extra monthly fee:

  • Access to your GitHub private Teams and Repositories
  • Extra scripting functionality to hook Gerrit events on the server side
    (commit validation, issue tracking association, …)
  • Integration with Atlassian Jira or BugZilla
  • Integration with BuildHive from CloudBees for Continuous Integration
  • Extra enterprise account protection for GerritHub.io accounts (additional password / X.509 Certificates)

Wow, that is amazing! When can I get GerritHub private edition?

We are currently in the public beta stage; you can start using the implemented features for FREE during the trial by logging in to GerritHub using the URL:

https://review.gerrithub.io/login?scope=scopesPrivate

Can I provide suggestions and give feedback during the public beta trial?

Yes, you are very welcome to provide your feedback, and we are very open to adjusting the development of GerritHub private features to your needs!

For problems and getting support:
http://gerritforge.com/support

For suggestions and feedback, please use the Gerrit Code Review forum:
https://groups.google.com/forum/#!forum/repo-discuss

Is GerritHub OpenSource?

Absolutely YES: GerritHub is based on Gerrit Code Review 2.10-SNAPSHOT master with a selected set of enterprise plugins:

  • GitHub plugin
  • Codenvy plugin
  • ITS-Jira plugin
  • Scripting provider plugin
  • SingleUserGroup plugin
  • Download commands plugin
  • Replication plugin
  • Gravatar plugin
  • Review notes plugin

If you want to review and contribute to Gerrit directly, you are welcome to join the developers and contributors community!

-2 days to the Gerrit User Summit 2014

The Gerrit User Summit 2014 is about to start in only 2 days: it is going to be two days of exciting news and innovations in the world of Code Review. There are names from the largest companies in the world that have adopted the Code Review workflow in large enterprise environments: Google, SAP, SonyMobile, Ericsson, IBM, Garmin, HP, CollabNet, GerritForge, Codenvy, Eclipse Foundation and LibreOffice.

During all this week there is a special promotional discount on the Learning Gerrit Code Review book. Additionally, for the attendees of the conference, there will be a limited number of signed paperback copies available at the session “Gerrit or GitHub? Take both!”

In order to redeem the book promotion, scan the QR code and enter one of the following PROMO-CODEs:

Book PROMO-CODE: LGCRB20
eBook PROMO-CODE: LGCReB20

The Gerrit User Summit agenda was published yesterday and has a lot of very interesting talks and announcements:

Day 1 – Friday 21st of March

  • What’s new in Gerrit 2.8 (David Pursehouse – Gerrit maintainer – SonyMobile)
  • Scaling Gerrit at Ericsson (Patrick Renaud, Vladimir Cantiru, Hugo Ares – Ericsson)
  • Monitoring Gerrit (Doug Kelly – Garmin)
  • Browsing Repository Content with Gerrit’s REST API (Simon Kaegi – IBM)
  • Gerrit@LibreOffice (David Ostrovsky – LibreOffice)
  • Gerrit plugins made easy with Scripting (Luca Milanesio – GerritForge)
  • The Angular revolution in Gerrit! (Dariusz Luksza – CollabNet)

Day 1 will end with a very interesting Q&A with the Gerrit user community about the features they would like to see in the forthcoming releases!

Day 2 – Saturday 22nd of March

  • 2014 Roadmap (Shawn Pearce – Gerrit project founder, Google)
  • Gerrit@SAP (Edwin Kempin – Gerrit Code Review maintainer – SAP)
  • Integrating CLA and Origin checks with Gerrit (Denis Roy – Eclipse Foundation)
  • Guiding Diffy to the Enterprise land (Dariusz Luksza, Eryk Szymanski – CollabNet)
  • Collaboration at Scale: The Openstack CI toolbox (Khai Do – HP)
  • Gerrit or GitHub? Take Both! (Luca Milanesio – GerritForge)
  • Diffy gets Enterprise grade (Dariusz Luksza, Eryk Szymanski – CollabNet)
  • Continuous Development with Gerrit (Tyler Jewell & Luca Milanesio – Codenvy & GerritForge)

Day 2 will end with a meet-up with food and drinks, sponsored and organised by Codenvy, where the Gerrit community can discuss and exchange their post-Summit impressions and ideas on the future of Code Review.

It is going to be again a huge leap forward for the Code Review community and for the improvement of the Git and Gerrit projects!

Continuous Development with GerritForge and Codenvy

On March 22nd, come see Codenvy CEO Tyler Jewell and GerritForge CEO Luca Milanesio present at Google’s HQ in Mountain View, CA. They’ll cover Codenvy’s continuous delivery system for integrating code reviews, Git, and SaaS developer environments in order to eliminate waste in the development workflow.

[…]

Read the full story at Codenvy Blog
[by Eric Cavazos]

More capacity and performance for GerritHub.io

We are pleased to announce that we have successfully completed a new major hardware upgrade to the GerritHub.io platform.

What has been upgraded on GerritHub.io?

There is a brand new production cluster which provides:

  • More memory (up to 32 GBytes per node)
  • More disk-space (up to 2TBytes per node)
  • More concurrency (up to 8 CPUs per node)

The new cluster is geolocated in Germany (Bayern) and provides much better and more stable bandwidth.

Do I need to change anything on my client?

GerritHub.io is accessible from the old and the new IP addresses:

  • old IP address: 94.23.71.44
  • new IP address: 148.251.77.70

The GerritHub DNS is currently propagating the new IP for the gerrithub.io and review.gerrithub.io hostnames: during the propagation time (max 24h) both IPs will provide the same access to the Germany-based production cluster.

What about my Git/SSH access?

When using Git over SSH, the remote host SSH key is exchanged and associated with the resolved remote IP in the ~/.ssh/known_hosts file on your local machine. This means that you currently have the GerritHub.io SSH public key associated with 94.23.71.44 (the old IP address).

When the DNS propagation is completed, you will see a warning from your SSH client asking you to verify that the new IP address is OK. In some cases you may be asked to verify and re-accept the SSH public key.

Example of the warning you would probably see on your Git client over SSH:

Warning: Permanently added the RSA host key for IP address '[148.251.77.70]:29418' to the list of known hosts.

Double-check that the IP address shown corresponds to the new GerritHub.io cluster (148.251.77.70). This warning will be displayed only once and then the new IP will be stored in your ~/.ssh/known_hosts file.
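
If you prefer not to wait for the interactive prompt, you can also fetch and append the host key for the new address yourself using standard OpenSSH tooling (shown here only as an example; always compare the fingerprint with the one you already trust):

ssh-keyscan -p 29418 -t rsa 148.251.77.70 >> ~/.ssh/known_hosts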

GerritHub: code review for GitHub private repositories – early access

Support for GitHub private repositories is making substantial progress: we are proud to announce that the first milestone has been completed and is available for early access.

By using GerritHub on top of your existing GitHub private repositories, you can now define a safer set of commit policies and prevent Git forced pushes on a per-branch basis.

What exactly is GerritHub private repository support?

With GitHub you can share code with other people and collaborate with the community of developers using public Git repositories on the Web. Your code is public by default and readable by anyone on the Web. This is the most typical case of using GitHub for the development of OpenSource projects.

However, sometimes you want to restrict access to your repository to a limited set of people or teams. Your code is then not accessible to anonymous users but only to the people you have selected from your GitHub Team security panel. This is typically the scenario of using GitHub for a private business or organisation.

How can GerritHub support private GitHub repositories?

GerritHub is a public instance of Gerrit Code Review, which provides highly customisable and sophisticated security. Whilst right now all GerritHub projects share a common public policy, you can customise your Gerrit project security and further restrict or extend the default permissions.

What are the benefits of GerritHub on private GitHub repositories?

By using Gerrit Code Review on top of GitHub private repositories you can improve the security, collaboration and visibility of changes in your development team:

  • Provide a common dashboard with all pending changes on a per-project basis
  • Define validation rules for code to be merged, based on quality, scoring and build validation results
  • Notify people on what is happening on the project’s code
  • Define fine-grained permissions on a per-branch basis
  • Limit collateral damage by blocking accidental force-pushes on release branches (see the sketch below)
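
As an illustration of the last point, branch-level protection in Gerrit lives in the project’s refs/meta/config branch; the following is only a hedged sketch of the workflow (the branch pattern and group name are examples, the group must also be listed in the “groups” file of refs/meta/config, and your project may require pushing the change for review via refs/for/refs/meta/config instead of pushing it directly):

git fetch origin refs/meta/config && git checkout FETCH_HEAD
git config -f project.config --add "access.refs/heads/release/*.push" "block +force group Registered Users"
git commit -am "Block forced pushes on release branches"
git push origin HEAD:refs/meta/config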

How can I get early access to GerritHub for private repositories?

GerritHub for private repositories is FREE for the initial 30 days of early access: it will then be charged at 25% of your GitHub private subscription fee. This means that, starting from the 3rd of April 2014, if you are paying $48/year on your GitHub personal plan, GerritHub would cost only $12/year.

In order to switch to GerritHub private plan, you need to perform the following steps:

  1. Clear your browser cookies and cache
  2. Login to GerritHub.io using this url:
    https://review.gerrithub.io/login?scope=scopesPrivate
  3. Accept the modified GitHub authorisation screen: you will be requested to grant full access to your GitHub personal profile and public/private repositories
  4. Confirm your GitHub password

How can I import my private GitHub repositories?

Once you have logged in with the private scope on GerritHub, the full list of your organisations and repositories is available on the import screen.

You can access the GitHub import screen by choosing the “GitHub” top menu and the “Repositories” entry,
or by visiting the URL https://review.gerrithub.io/plugins/github-plugin/static/repositories.html

How can I customise my private repository security on GerritHub?

You are free to use the Gerrit Code Review security configuration screen on your imported private repositories, using the “Projects” top menu, inserting your project name in the search box and selecting your project. The security configuration is available in the “Access” menu. Alternatively, you can access the screen directly using the URL https://review.gerrithub.io/#/admin/projects/organisation/repository,access, where organisation is your username or organisation and repository is your GitHub repository name.

Where can I find more information on Gerrit Code Review security and review rules?

The Gerrit Code Review on-line documentation at https://review.gerrithub.io/Documentation/access-control.html provides a very detailed set of information useful for customising your project’s security.

Alternatively, if you would like a more gradual and descriptive step-by-step guide, the “Learning Gerrit Code Review” book at http://gerrithub.io/book, available on Amazon, provides an easy and accessible introduction to code review and security.

This is cool, but how can I provide feedback?

GerritHub is nothing more than Gerrit Code Review plus a collection of selected plugins, including the GitHub integration plugin (see http://www.packtpub.com/article/using-gerrit-with-github). You are welcome to subscribe to the Gerrit mailing list at https://groups.google.com/d/forum/repo-discuss and to the GitEnterprise blog at http://gitenterprise.me.

Comments, suggestions and hints are more than welcome!

What about Enterprise Support with guaranteed SLA on problems and incidents?

GerritForge Enterprise Support on Gerrit Code Review also covers GerritHub cloud usage on private repositories. If you need a guaranteed SLA, you can choose one of the currently available support plans at http://gerritforge.com/support.