How to Migrate a Git Repository

When and why?

We wrote yesterday about the GitEnt-Scm.com shutdown scheduled for April 30th, 2016. The question you now face is: how do you migrate somewhere else?
Although StackOverflow already contains over 800 threads answering this question, we thought that a practical example based on a real-life GitEnt repository would help you avoid trial-and-error discovery.

Step 1 – Mirror clone

When you want to clone a repository for the purpose of migration, you really want everything, including all the other refs that are not branches:

  • Git Tags (refs/tags/*)
  • Git Notes (refs/notes/*)
  • Gerrit Reviews (refs/changes/*)
  • Gerrit Configs (refs/meta/*)

Instead of using a standard clone, you can do a “git clone --mirror”, which implies --bare and thus does not generate a working copy.

Example:

$ git clone --mirror ssh://myuser@gitent-scm.com/git/myorg/myrepo.git
Cloning into bare repository 'myrepo.git'...
remote: Counting objects: 109, done
remote: Finding sources: 100% (109/109)
remote: Total 109 (delta 19), reused 83 (delta 19)
Receiving objects: 100% (109/109), 66.42 KiB | 0 bytes/s, done.
Resolving deltas: 100% (19/19), done.
Checking connectivity... done.

Step 2 – Create empty repo on the new Git Server

You need an empty target repository to push your mirrored local clone to. Note that most Git servers offer to create a first master branch with a README but, in this case, you do not need it: it would only create more trouble in your migration path.

Example for GitHub:

– Go to https://github.com/new and create the ‘myrepo’ repository
– Do not tick any of the suggested README or LICENSE auto-generation
– Once the project is created, GitHub provides you with the repository Git URL (e.g. git@github.com:myorg/myrepo.git)
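
Alternatively to the steps above, the same empty repository can be created from the command line through the GitHub REST API. A minimal sketch, assuming ‘myuser’ has admin rights on the ‘myorg’ organisation (curl will prompt for the password or token):

$ curl -u myuser -d '{"name": "myrepo", "auto_init": false}' \
    https://api.github.com/orgs/myorg/repos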

Step 3 – Push to the new Git Server

You are now ready to push to the target repository, and we can use the useful “--mirror” option again.
As with the clone, “--mirror” automatically includes all refs, including the non-branch ones (tags, notes, reviews, configs, …); it also removes from the remote any refs that are not present in your local clone. You should never use this option with a regular default clone, as you would risk removing all the remote refs that are typically not fetched by a standard “git clone” operation.

Example for GitHub:

$ git push --mirror git@github.com:myorg/myrepo.git
Counting objects: 109, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (61/61), done.
Writing objects: 100% (109/109), 66.42 KiB | 0 bytes/s, done.
Total 109 (delta 19), reused 109 (delta 19)
To git@github.com:myorg/myrepo.git
* [new branch] refs/changes/02/802/1 -> refs/changes/02/802/1
* [new branch] refs/changes/03/803/1 -> refs/changes/03/803/1
* [new branch] master -> master
* [new branch] refs/meta/config -> refs/meta/config
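
To double-check that nothing was left behind, you can compare the full list of refs on the old and the new server. A minimal sketch, where the temporary file names are ours for illustration:

$ git ls-remote ssh://myuser@gitent-scm.com/git/myorg/myrepo.git | sort > refs-old.txt
$ git ls-remote git@github.com:myorg/myrepo.git | sort > refs-new.txt
$ diff refs-old.txt refs-new.txt && echo "all refs migrated"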

Step 4 – Import into GerritHub.io (Optional)

Your repository has now been fully migrated to your new target server. If you wish to keep using Gerrit Code Review for your development workflow, you can link your repository to Gerrit using GerritHub.io.

The YouTube video explains how to perform this last operation using the GerritHub.io import wizard.

Need more help?

Do you require more help? Contact our Sales Department at sales@gerritforge.com and we will provide the extra support you need or perform the migration to GerritHub.io for you.

GitEnt-scm.com Farewell

An open letter to all GitEnt-Scm.com users

It has been a fantastic journey to launch and see the GitEnterprise service grow over the past five years.
In 2011 we announced the availability of a new Enterprise-grade service ahead of other major competitors such as CollabNet and Atlassian. We were the only real Enterprise-Ready Git service, much more advanced than GitHub and well before the birth of GitHub:Enterprise.

Since then, over 5000 people have used and loved our service and enjoyed a fully FREE and compelling Git server, powered by Gerrit Code Review, the leading OpenSource platform for Code Review on Git.
We are grateful for your trust and confidence in us.

From premium service to commodity

Times have changed: what was once considered premium has become a commodity, and services like BitBucket started to erode our uptake over the past three years. We moved on to a different and more compelling level of service, jumping again to the edge of innovation and moving into Code Review and its integration with the Continuous Delivery pipeline. In 2013 we launched a brand-new service called GerritHub.io, which is now the reference point for major OpenSource and commercial organisations such as IBM, Cisco Systems, RedHat and Rackspace.

We continued to maintain both GitEnterprise and GerritHub.io so that you did not have to face any migration or disruption; however, the audience of GitEnterprise has become so marginal that we have, unfortunately, decided to shut down the service within the next 30 calendar days.

The choice: Red or Blue pill?

You have two options: either stay on the cutting edge of technology and jump to GerritHub.io, or move to a free commodity service.

Option 1 => migrate to GerritHub.io

Option 2 => migrate to another Git provider, such as BitBucket or GitLab.

If you decide to go for Option 1, we invite you to watch the GerritHub.io video on YouTube and decide whether you would like to start adopting the Gerrit Code Review workflow, bearing in mind that it may actually change the way you interact with and manage your Git repositories.

Should you need our help in migrating your repositories, we can offer our bolt-on support services at a 10% discounted rate. See www.gerritforge.com/pricing for all the options available and the costs involved.

Time is running fast: ACT NOW!

You do need to take a decision before the 30th of April 2016: after that date GitEnterprise.com and GitEnt-SCM.com will simply redirect to the GerritForge website and your repositories will not be accessible anymore.

Thank you again for those five fantastic years and for believing in us.
We hope you will decide to continue your journey with us.

Should you have any doubts, please do not hesitate to come back to us.

The GerritForge Support Team.

Gerrit Upgraded with No Downtime


Zero DownTime success story.

From today at 08:06 GMT, GerritHub users are served by our brand new infrastructure, geo-located in Beauharnois, Quebec, Canada. It is the first time we have applied a zero-downtime roll-out scheme: Pingdom reported 100% uptime over the past 24 hours and a 688 msec average response time for the page listing open changes. The two response-time spikes on the graphs above were actually caused by the old German infrastructure and happened before the start of the roll-out.

The switch of the traffic to the new infrastructure is visible in the increase of the overall response time (IP packets were routed from Germany to Canada, causing extra hops); as the DNS propagation spread across the world, the overall number of hops gradually came back to normal.

Timeline of the events.

  • 08:00:00 GMT – Phase 1 – Set Gerrit READ-ONLY. All changes and Git repositories started to refuse push and updates.
  • 08:00:01 GMT – Phase 2 – Wait for pending replication to complete. Replication queue was empty; there was no need to wait.
  • 08:00:02 GMT – Phase 3 – Mirror DB and Git for the last time, delta reindex, DB upgrade and Gerrit restart. This was the longest part of the roll-out and lasted 5′ 32″, in line with our estimates.
  • 08:05:34 GMT – Phase 4 – Cache warm-up. 20K projects, 8K accounts and 4.6K groups were pre-loaded in Gerrit. This step was optional but allowed us to redirect all the traffic without risks of causing thread spikes on the new infrastructure.
  • 08:06:23 GMT – Phase 5 – Redirect traffic to the new infrastructure.

Did anybody notice the rollout?

During the rollout, the Git projects and Gerrit changes were read-only for 6′ 23″. According to the logs, 493 Git/HTTP and 172 Git/SSH invocations were made and completed successfully: none of them failed.

What is the situation right now?

The new infrastructure public IP (192.99.233.76) has almost completed its DNS propagation around the world; the only countries not yet entirely covered are Australia and China. The rest of the world is coming directly to Canada, avoiding the German hops. Metrics are good: lower CPU utilisation and thread consumption compared to the old German infrastructure, a symptom of the reduction in execution and serving times and latency.
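
If you want to check whether your local DNS has already picked up the new IP, a quick dig should do; once propagation is complete the answer should be the new public IP:

$ dig +short review.gerrithub.io
192.99.233.76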

What’s next?

From now on we will continue to use this Blue/Green roll-out strategy, possibly shrinking the read-only window by introducing live distributed reindexing and cache warm-up.

We fully commit to Zero-Downtime and Stability, the most valuable assets for our clients.

GerritHub and Zero-Downtime Upgrade

GerritHub gets bigger on Mon, 21 March 08:00 GMT


GerritHub has experienced unprecedented growth over the past two years. The November 2015 numbers presented at the Google User Summit in Mountain View – CA have been surpassed again, and we do need to make sure that our infrastructure is still capable of dealing with current and future users’ needs.

What is changing in GerritHub.io?

We are changing everything, from the version of Gerrit to the hardware, network and storage infrastructure. Data, DBMS, indexes and caches need to be upgraded and refreshed to make sure that the new systems reflect exactly the current production data and sessions.
We are also changing the geo-location of our servers, from the current server farm in Germany (Nuremberg, Bavaria – 100 Mbps) to a new server farm in Canada (Beauharnois, Quebec – 1 Gbps).

Why so many changes?

We started to measure some significant delays in the Git and review operations on the old infrastructure, mainly due to three factors:

  1. More users, more repositories, more concurrency. Individuals, OpenSource projects and Businesses started using GerritHub.io for their mission-critical repositories, considering Gerrit the “source of truth” of their review workflow. We needed more horsepower, memory, storage and the ability to scale even further.
  2. Bandwidth from USA and Far-east. The majority of people using GerritHub.io are from the other side of the Atlantic Ocean: this is typically not a problem from 7 AM to 3 PM … but after 4 PM the connectivity between Europe and the Americas becomes slow. Additionally, people using GerritHub.io from India, Japan, Australia and New Zealand experienced terrible slowdowns because of the excessive number of hops to reach Germany.
  3. Gerrit master is much faster. Based on the current data and metrics measured on GerritHub.io, we have contributed a lot of patches to reduce the overhead caused by the Gerrit DB and lessen the number of SQL queries per minute. All those new improvements are on Gerrit master, and we need to catch up with the “latest and greatest” version.

Will I experience any GerritHub.io outage?

The last time GitHub needed to make a major upgrade, it asked its 5M users to stop working for 23 minutes. This translates to a loss of two million hours of continuous delivery lifecycle, equivalent to over 130 man-years, worth no less than eight million dollars.
We are going to adopt a new zero-downtime Gerrit roll-out strategy to make sure that all those changes are not going to impact your day-by-day activity. If you were not reading this post, you would possibly not even notice the “switch” from the old to the new infrastructure, apart from the increase in speed and bandwidth.

Here is the zero-downtime GerritHub.io migration, step by step, with the associated expected timings.

Phase 0 – Replication to the new Gerrit infrastructure. (- 1 month ago)
We started migrating everything one month ago, and the old and new infrastructures are working side by side, thanks to Gerrit master-slave replication. The new Gerrit servers are active as slaves and are read-only.

Phase 1 – Migration kick-off. (08:00 GMT)
We install a Gerrit plugin that rejects all pushes to GerritHub.io repositories with a courtesy message: “Gerrit is under maintenance, all projects are READ ONLY”. All HTTP POST, PUT and DELETE are disabled on the Gerrit REST API.
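
The actual implementation is a Gerrit plugin, but the idea itself is simple. As a minimal sketch, the same behaviour could be obtained with a plain Git pre-receive hook (for illustration only, not the plugin code):

#!/bin/sh
# pre-receive hook: reject every incoming push with a courtesy message
echo "Gerrit is under maintenance, all projects are READ ONLY" >&2
exit 1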

Phase 2 – Wait for replication events to complete and migrate DB. (08:02 GMT)
Git repositories are continuously replicated, but we do need to make sure that the event queue is empty. Once that happens, we schedule the last DB migration to the new infrastructure.

Phase 3 – Gerrit DB upgrade and reindex (08:04 GMT)
The new Gerrit server executes the final upgrade and off-line reindex of the latest received changes.

Phase 4 – Gerrit start-up and cache warm-up (08:05 GMT)
The new Gerrit is restarted and the most critical Gerrit caches (projects, accounts and groups) are pre-loaded into memory. This allows the incoming traffic spike to be absorbed without exhausting the available threads and makes the transition as smooth as possible, without slowdowns.

Phase 5 – Traffic switch and DNS updates (08:06 GMT)
GerritHub.io redirects all incoming HTTPS and SSH traffic to the new infrastructure. Git pushes and HTTP PUT, POST and DELETE operations of the REST API are operational again and served by the new Gerrit infrastructure. GerritHub.io DNS is updated to the new Canadian IPs.

Phase 6 – The new IPs get propagated to all worldwide DNS servers (+ 1 day)
Once all the DNS servers in the world have been updated, everyone will go directly to the new infrastructure without further hops or redirection from Germany. Customers from the USA, Canada, South America, Asia, Japan, New Zealand and Australia should see a significant reduction in network latency and an increase in GerritHub.io responsiveness.

Firewall and SSH considerations

Even though the Gerrit server’s SSH key is not changing, some of you may see a warning similar to this when you push or pull over SSH:

Warning: the RSA host key for ‘review.gerrithub.io’ differs from the key for the IP address ‘148.251.77.70’

The warning message will also tell you which lines in your ~/.ssh/known_hosts need to change. Open that file in your favorite editor, remove or comment out those lines, then retry your push or pull.
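
Alternatively, instead of editing the file by hand, you can let OpenSSH drop the stale entries for you:

$ ssh-keygen -R review.gerrithub.io
$ ssh-keygen -R 148.251.77.70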

Should your network have strict firewall rules for accessing external sites, you may want to whitelist the IP of the new infrastructure’s web load balancer: 192.99.233.76.

Follow GerritHub.io migration progress.

We will advertise the migration progress on Twitter at @GitEnterprise. Should you have any issue you can tweet us or contact GerritForge Customer Support at support@gerritforge.com.

GerritForge helps Gerrit 3.0 stability

Gerrit 3.0 plan announced: we need stabilisation now


The Gerrit 3.0 plan and its NoteDB reviews have been officially announced at the Gerrit User Summit 2015. NoteDB is already available as an experimental feature in the current Gerrit master, but it needs much more stability before it can be officially supported for production.
GerritForge decided to help and reuse its existing continuous integration system to validate every Gerrit patch set against both the current and the new NoteDB review persistence back-ends, in order to avoid regressions during the 2.13 and 3.0 development.

Pre-commit validation by GerritForge CI

If you have posted a patch to gerrit-review.googlesource.com in November, you have hopefully received a Verified +1 from a strange user with a Diffy logo on the side.
The CI provided by GerritForge on gerrit-ci.gerritforge.com automatically fetches every patch set pushed to gerrit-review.googlesource.com and triggers a slightly modified Gerrit build to check whether the code change introduces a regression. At first sight this may seem a quite normal Gerrit-to-Jenkins job integration; however, implementing it on top of Google’s multi-master replicated installation was not a piece of cake.

Gerrit Trigger plugin limitations on multi-master setups

Jenkins already has an out-of-the-box integration with Gerrit, provided by the Gerrit Trigger plugin maintained by Robert Sandell – Cloudbees. It leverages the Gerrit stream events through an SSH channel and makes use of the Gerrit REST API to action them according to the build result.
Google’s Gerrit setup, however, is not a trivial one-node installation and is further limited by the security constraints of the Google infrastructure, which does not allow any incoming SSH connectivity.
Additionally, the whole concept of “getting the events in a stream” isn’t going to work when events can come concurrently from multiple places at the same time: who is going to define the “global ordering”, and how would all those events be funnelled into a single TCP/IP socket? Even UDP would not work in this case, because an SSH channel requires confidentiality between two, and only two, peers.

Alternatives to SSH

During the hackathon, other approaches were discussed by Shawn Pearce, including the use of HTTP WebSockets (or Cometd) for fetching events without the need for an SSH connection. Events are still distributed and generated by multiple masters all the time, and the Jenkins plugin would then have the onus of contacting all the Gerrit servers and keeping a connection open to all of them. This is clearly not going to work, because the number of servers, their IPs and locations may change at any time, and the solution would eventually be in danger of losing precious events.

Back to polling

The only solution we envisaged was to fall back to a polling logic, where Jenkins asks Gerrit every 10 minutes: “what’s new since the last time we spoke?”. This goes against the main reason the Gerrit Trigger plugin was designed: avoiding SCM polling. It is, however, a much better and more optimised polling strategy; let’s see why.

Query and then fetch

The typical Git SCM polling relies on fetching all references at every poll interval and detecting whether new Git commits are available. This is notably slow and generates a huge overhead on the Git server. The approach we took is quite different and makes use of the Gerrit search capabilities, which are way faster and more powerful than a simple Git fetch.
Jenkins first asks Gerrit for the list of changes and associated commit IDs involved in any event since the last polling time: the result may include patch sets that have already been built, to avoid any gaps between polling intervals. The search is fast and implemented in … you know, Google is a search company, isn’t it?
Once the list of candidate commit IDs is identified, Jenkins goes through all of them and checks, using the Gerrit REST API:
– has it been built during my previous execution?
– has it been already accepted (or rejected) by me?
The commit IDs that have not been checked before and not yet validated are then used to trigger a specific job parametrised on:
– the specific branch
– the specific change ref-spec
Fetching is performed avoiding any wildcard, and the corresponding load on the Git server is minimal. Fetch (Git protocol) + build (using Buck) + test (unit + integration) + review feedback (REST API) takes an average of 5 minutes, which is an amazing result if you consider the size of the Gerrit project and the typically slow speed of a default Jenkins Git fetch.
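
For illustration, the “query then fetch” sequence boils down to something like the following sketch; the timestamp and the change ref are hypothetical:

$ # 1. Query: which changes have been updated since the last poll?
$ curl -sG "https://gerrit-review.googlesource.com/changes/" \
    --data-urlencode 'q=since:"2015-11-23 10:00:00"' -d o=CURRENT_REVISION
$ # 2. Fetch: only the exact refs of the candidate patch sets, no wildcards
$ git fetch https://gerrit-review.googlesource.com/gerrit refs/changes/02/802/1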

The bottom line

Using the query + fetch approach, which seemed a bit slow and old-fashioned at the beginning, was eventually very simple and successful. Instead of setting up SSH host-key verification, key exchange and ad-hoc channels, the only configuration needed is a valid Gerrit user and the HTTPS endpoint URL, the same one used for cloning the code.
The solution is much more reliable, as SSH channels are notably unstable and consume server threads. The only drawback is the slight delay between the patch-set upload and the start of the build (at most 10 minutes), which is acceptable in most cases.

Results

Since its roll-out, more than 1200 patches have been checked and rated, a lot of potential Gerrit regressions avoided and, more importantly, we have prevented the NoteDB code from diverging from the current mainstream development in terms of stability.

How can you re-trigger validation for a single change?

We have enabled anyone to trigger ad-hoc executions of the Gerrit validation flow using the following URL:
https://gerrit-ci.gerritforge.com/job/Gerrit-verifier-flow/build
This is a standard Jenkins parametrised build that requests the change to be built, as either a SHA-1 or a number. Once the job is triggered, the build will be executed and the validation feedback applied to your change, regardless of the previous build or validation status.
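
If you prefer the command line to the web form, a parametrised Jenkins build can also be triggered over HTTP. A sketch, assuming a parameter named CHANGE and a hypothetical change number (the actual parameter name may differ):

$ curl -X POST "https://gerrit-ci.gerritforge.com/job/Gerrit-verifier-flow/buildWithParameters" \
    --data "CHANGE=169999"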

Gerrit User Summit & Hackathon 2015

Gerrit User Summit at Google Mountain View – CA


Exciting times this year at the Gerrit User Summit and Hackathon 2015: the major contributors and players of the Gerrit community shared experiences, opinions and news in an intense 7-day event at Google, Mountain View, CA.

The User Summit

The first two days saw the User Summit at the centre of the stage: scalability, scalability and scalability was the leitmotif of the discussion.
GerritHub.io started the saga with the astonishing numbers of two years’ growth:

  • 6.5K users (+580%)
  • 41K changes (+1700%)
  • 16K projects (+530%)
  • 500GBytes (+500%)

The two years of live production experience with users coming from GitHub have highlighted problems (replication, repository sharding, multi-master) and possible solutions, ranging from Gerrit Virtual Private Hosting to on-premises deployments integrated with BitBucket and GitHub:Enterprise.
Ericsson continued with a very detailed description of how Gerrit is used as the “Enterprise Washing Machine” for all code that goes through the development pipeline: scalability and control are the fundamental keywords that were repeatedly mentioned and enforced.
The replication across sites has been massively improved in terms of performance and stability since the introduction of Git/HTTPS as the main protocol for replication, as previously advised by GerritForge in 2014.
The first day ended with Google stunning everyone with its hypersonic numbers:

  • 1.6M changes / 3.8M patch-sets
  • 240 Gerrit virtual nodes
  • 2.5M repositories

The second day was all about the future of Gerrit, with three very interesting features coming soon.

Gerrit 3.0 and NoteDB

Dave Borowitz presented the status of the replacement of the Gerrit DBMS with a 100% open-standard solution based on commit notes, implemented by all OpenSource and commercial “flavours” of Git. The solution will allow interoperability with other code-review implementations (e.g. Phabricator) and fully enable review replication and off-line operations.
The new DBMS-free version of Gerrit will be called Ver. 3.0, and it will be the next version after the current Gerrit master (Ver. 2.13) gets released.

Gerrit PluginManager: one plugin to rule them all

Luca Milanesio presented a new vision of how to discover and install plugins on the Gerrit platform: the Gerrit PluginManager. We should not bundle more and more plugins with Gerrit, which would eventually lead to an explosion of the WAR file size. You can now install only a “plugin manager”, which then guides you through the “one-click” set-up of all the others.
Differently from similar solutions, such as the Jenkins Update Centre, the Gerrit PluginManager is based on the live status of the plugins and their compatibility with Gerrit: as soon as a plugin gets patched and successfully compiled against a version of Gerrit, it is automatically listed and made available by the PluginManager. Additionally, GerritForge will provide a list of certified and guaranteed plugins that have been successfully tested with Gerrit.

PolyGerrit, the new web-component UX for Gerrit.

Andrew Bonventre is the Googler driving the transformation of the Gerrit UX to the new Polymer platform; however, for personal reasons he was not able to attend the Summit. Dave Borowitz stood in for him and unveiled what will be the (very) near future of the Gerrit UX: no more GWT with its complex black magic transforming Java into obscure JavaScript; the future comes through web components, a new, emerging and promising standard for HTML and modern web browsers.
Despite the UX being at a very early stage, Dave was able to showcase a fully functional list of changes and a search box powered by Polymer and calling the Gerrit REST API: really fast and promising!

Gertty, the text-only Gerrit Code Review.

On the complete opposite side, why not use Gerrit from an 80×25 text-only console? Gertty is an astonishing 100% char-based console client, built on the Gerrit REST API and a local SQLite DB for caching changes. It allows complete off-line operations and synchronisation with Gerrit changes. Productive and effective while you are on the go.

CollabNet and Gerrit tuning cheat-sheet

CollabNet presented a useful four-page brochure to guide you through the tuning of a Gerrit set-up for a small, medium or large installation. Based on their experience of running TeamForge SCM, the commercial fork of Gerrit Code Review built on their existing TeamForge ALM proprietary solution, they have been able to experiment with the Gerrit default settings and learn how to adjust them to leverage the full power of your setup.
The audience appreciated the effort and encouraged CollabNet to post all the findings as Gerrit reviews, to make the code-base better and improve the default settings of the set-up.

Gerrit and Jenkins Workflow dance together with Docker.

Cloudbees presented a very effective demo of how to use Gerrit and the Jenkins Workflow plugin to implement a real-life Continuous Delivery pipeline. The presentation leveraged the use of Docker images, as previously presented by Stefano Galarraga in his “Gerrit and Jenkins Continuous Delivery Pipeline for BigData” talk. They both explained and showed how code review is key to implementing a successful and smooth code validation and roll-out. Stefano’s presentation made use of the exciting new “topic submission” feature, which enables grouping and submitting multiple changes across repositories.

The Hackathon, topics and improvements.


A lot of code has been pushed during the hackathon:

  • 400 changes merged
  • 3 Gerrit versions released

Gerrit metrics

Surely the hot topic of the hackathon has been the introduction of a new pluggable metrics engine in Gerrit, currently half-merged into the master branch. Gustaf demoed how it is now possible to use standard tools such as Graphite and the JMX console to extract, display and graph the most relevant Gerrit metrics in real time. This is similar to what the JavaMelody plugin was providing, with the added value of getting the data outside the Gerrit JVM and analysing it in greater detail on a standard monitoring platform.

PolyGerrit

Gerrit master has been officially upgraded to allow the development of the new Polymer-based UX for Gerrit, code-named “PolyGerrit”. Gerrit master will from now on require the installation of NodeJS during development. This is needed for building and packaging the “vulcanised” version of the Gerrit UX, which contains the basic components of the user interface. At the moment the only thing visible is the demo list of changes presented by Dave Borowitz at the User Summit; however, new changes are coming, and Google announced that they are targeting Q4 2015 for a first internal release of the new UX.

GerritForge CI Verifier

As Gerrit 3.0 will be completely revamped in terms of review persistence, the community pushed for stricter change validation on both the old DBMS-based and the new NoteDB-based persistence. GerritForge extended the use of its CI system (https://gerrit-ci.gerritforge.com) to cover the validation of every change / patch set uploaded to Gerrit from now on. This is a substantial improvement to the code review workflow of Gerrit itself and will hopefully contribute to a stable and solid Ver. 3.0 release next year.

Externalisation of Gerrit Hooks and Events to plugins

Qualcomm worked on completing the externalisation of Gerrit hooks and stream events into plugins. This change will make it possible to plug in different event providers depending on the type of Gerrit set-up, single node or multi-master. One more important step towards an OpenSource implementation of Gerrit multi-master.

GerritHub user-controlled GitHub Scopes

Nowadays people are very careful about privacy and user data: nobody grants access to their profile without first checking the possible consequences.
We want to give users the ability to always know and control what level of access is granted to their data: that’s why we improved the way you log in to GerritHub.io.

GitHub scopes: what are they?

GitHub provides authentication and access to the user’s profile using a protocol called OAuth 2.0. When GerritHub asks a user to authenticate, it is granted a set of permissions to operate on behalf of the user on their GitHub resources, which include:

  • User’s personal data (name, e-mails)
  • User’s membership to organisations and teams
  • User’s repositories

The set of permissions to access and operate on your data is known as a “scope” in GitHub terms.
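
For reference, the scope is requested as a query parameter of GitHub’s OAuth authorisation URL. A request asking for e-mail and public-repository access looks roughly like this (the client_id is a placeholder):

https://github.com/login/oauth/authorize?client_id=<your-client-id>&scope=user:email%20public_repo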

How is GerritHub helping me to control my access?

GerritHub has from today a new “Scope Selection” screen with two main objectives:

  1. Displaying your current scope and the associated rights GerritHub has on your GitHub profile
  2. Giving you the ability to switch to a different “scope” and, consequently, to change the rights that GerritHub has on your profile data


Transparency is good, but what is the practical added value?

There have been in the past common complaints about GerritHub having either too much or too little access to your GitHub profile:

  • Too much? Why does GerritHub.io need access to my e-mail address? Why does GerritHub need to see my public keys?
  • Too little? Why does GerritHub not show my private repositories in the import screen? How can I see my organisation membership in the GerritHub project security screen?

With the ability to visualise and change the current “scope”, people can now be more aware of why things are not showing up. They can make conscious decisions about how to change them, with full transparency on the associated implications.

A common scenario: importing and accessing private GitHub Organisations, Teams and Repositories.

When you need to import an existing private GitHub project, you need access to information that is not publicly available:

  • Your membership to a private organisation
  • Your ownership of a Team structure
  • The ability to clone and push your private organisations’ repositories

There is now a special information box suggesting that you can change your “scope” if you don’t see the organisations and repositories you want to import.


After changing the scope, you can log in again and you will have an improved set of options to get more data and repositories from your GitHub account.

Like it? Will you use it on a daily basis?

We are eager to get your feedback on this new feature: tell us what you think and let us know what you would change or add to the set of “scope” permissions.

Gerrit Code Review and Jenkins Continuous Delivery Pipeline on BigData

Gerrit at the Jenkins User Conference 2015 – London

For the very first time, CloudBees organised a full User Conference in London, and we were very pleased to present a real-life case study of Continuous Integration and Continuous Delivery applied to a large-scale BigData project.

See below a summary of the overall presentation, which is published in full on YouTube.

The trap of the BigData production phase

BigData has historically been used by data scientists to analyse data and extract features that are relevant for the business. This has typically been a very interactive process, happening mostly in “notebook-style” environments where almost everything, from ad-hoc queries to graphs, can be edited and executed interactively. This early stage of the process is typically known as the “exploration” or “prototype analysis” phase. Sometimes it lasts only a few days, but it is often used as the day-by-day modus operandi.

However, when the exploration phase is over, projects need to be rewritten or adapted using a programming language (Scala, Python or Java), with transformations and aggregations expressed as jobs. During the “production-isation” phase, the code needs to be properly written and tested to be suitable for production.

Many projects fall into the trap of reducing the “production phase” to a mere translation of notebooks (or spreadsheets) into Scala, Java or Python code, relying only on the manual analysis of the resulting data as the sole testing methodology. The lack of software-engineering practices generates complex monolithic code, difficult to maintain, to understand and thus to validate: the agility of the initial “exploration” phase is then miserably lost in the translation into production code.

Why Continuous Delivery on BigData?

We have approached the development of BigData projects in a radically different way: instead of simply relying on existing tools, often not sufficient for setting up a proper Agile delivery pipeline, we introduced brand-new frameworks and applied them to the building blocks of a Continuous Delivery pipeline.

This is how Stefano Galarraga started the ScaldingUnit project, aimed at decomposing the development of complex Scalding MapReduce jobs into simple and testable units.

We then started to benefit from the improved agility and speed of delivery, giving constant feedback to data scientists and delivering constant value to the business stakeholders during the production phase. The talk presented at the Jenkins User Conference 2015 is a smaller-scale showcase of the pipeline we created for our large clients.

Continuous Delivery Pipeline Building Blocks

In order to build a robust continuous delivery pipeline, we need a robust code-base to start with: this seems a bit obvious, but it is often forgotten. The only way to create a stable code-base, collectively developed and shared across different [distributed] teams, is to adopt a robust code review lifecycle.

Gerrit Code Review is the most robust and scalable collaboration system, allowing distributed teams to submit their changes and provide valuable feedback about the building blocks of the BigData solution. Data scientists can participate as well during the early stages of the production code development, giving suggestions and insight on the solution whilst it is still in progress.

Docker provided the pipeline with the ability to define a set of “standard disposable systems” to host the real-life components of the target runtime, from Oracle to a BigData CDH Cluster.

Jenkins Continuous Integration is the glue that coordinated all the different actors of the pipeline, activating the builds based on the stream events received from Gerrit Code Review and orchestrating the activation of the integration test environments on Docker.

Mesos and Marathon managed all the physical resources to allow a balanced allocation of all the Docker containers across the cluster. Everything has been managed through Mesos / Marathon, including the Gerrit and Jenkins services.

Pipeline flow – Pushing a new change to Gerrit Code Review

The BigData pipeline starts when a new piece of code is changed in the local development environment. Typically, developers test local changes using the IDE and the Hadoop “local mode”, which allows the local machine to “simulate” the behaviour of the runtime cluster.

The local-mode testing is typically good enough for running unit tests, but it is often unable to detect problems (e.g. non-serialisable objects, compression, performance) that are likely to appear only in the target BigData cluster. Allowing a code change to be pushed to a target branch without having been tested on a real cluster represents a potential risk of breaking the continuous delivery pipeline.

Gerrit Code Review allows the change to be committed and pushed to the Server repository and built on Jenkins Continuous Integration before the code is actually merged into the master branch (pre-commit validation).

Pipeline flow – Build and Unit-tests execution

Jenkins uses the Gerrit Trigger plugin to fetch the code currently under review (which is not on master but on an open change) and triggers the standard Scala SBT build. This phase is typically very fast, takes only a few seconds to complete, and provides the first validation feedback to Gerrit Code Review (Verified +1).
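
The fetch performed by the plugin targets the exact patch-set ref of the change under review. Conceptually it boils down to something like the following sketch, where the change number and patch set are hypothetical:

$ git fetch origin refs/changes/45/12345/2
$ git checkout FETCH_HEAD
$ sbt clean test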

Until now we haven’t done anything special or different from a normal git-flow-based continuous integration: we pushed our code and got it validated in Jenkins before merging it to master. You could actually implement the pipeline up to this point using GitHub Pull Requests or similar.

Pipeline flow – Integration test automation with a real BigData Cloudera CDH Cluster

Instead of considering the change “good enough” after a unit-test validation phase and then automatically merging it, we wanted to go through a further validation on a real cluster. We have completely automated the provisioning of a fully featured Cloudera CDH BigData cluster for running our change under review with the real Hadoop components.

In a typical pipeline, integration tests in a BigData cluster are executed *after* the code is merged, mainly because of the intrinsic latencies associated with provisioning a proper, reproducible integration environment. How, then, to speed up the integration phase without necessarily blocking the development of new features?

We introduced Docker with Mesos / Marathon to have a much more flexible and intelligent management of the virtual resources: without having to virtualise the hardware, we were able to spawn new Docker instances in seconds instead of minutes! Additionally, the provisioning was coordinated by the Docker Build Step Jenkins plugin, allowing the orchestration of the integration-test execution and the feedback to Gerrit Code Review.

Whenever an integration-test phase succeeded or failed, Jenkins then submitted an “Integrated +1/-1” feedback to the original Gerrit Code Review change that triggered the test.
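
Custom labels like “Integrated” can be scored through Gerrit’s SSH command interface. A minimal sketch, assuming a “jenkins” batch user and the same hypothetical change and patch-set numbers as above:

$ ssh -p 29418 jenkins@gerrit.example.com gerrit review --label Integrated=+1 12345,2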

Pipeline flow – Change submission and release

When a change has received Verified +1 (build + unit tests successful) and Integrated +1 (integration tests successful), it is definitely ready to be reviewed and submitted to the master branch. The additional commit triggers the final release build, which tags the code and uploads it to Nexus, ready to be elected for production.

Pipeline flow – Rollout to production

The decision to roll out a new change to production is typically enabled by a continuous delivery pipeline but manually operated by the business stakeholders. Even though we could *potentially* roll out every change, we did not *necessarily* want to do that, because of the associated business implications.

Our approach was then to publish to Nexus all the potential *candidates* for production and roll them out to a pre-production environment, ready to be assessed by data scientists and the business in real time. The daily job scheduler had a configuration parameter that simply allowed “pointing” to the version of the code to run every day. In this way, whatever is deployed to Nexus is potentially fully working in production, and rolling out or rolling back a release is just a matter of changing a label in the daily job scheduler.

Summary

Building a Continuous Delivery Pipeline for BigData has been a lot of fun and improved the agility of the Business in rolling out changes more quickly without having to compromise on features or stability.

When using a traditional Continuous Integration pipeline, the different stages (build + unit tests, integration tests, system tests, rollout) all happen on the target branch, causing it to be amber or red at times: whenever tests fail, the pipeline needs to be restarted from the start and people are blocked.

By adopting a code-review-driven Continuous Integration pipeline, we managed to get the best of both worlds: avoiding feature branches but still keeping the ability to validate the code at each stage of the pipeline and report it back to the original change and the associated developer, without compromising the stability of the target branch or introducing artificial and distracting feature branches.

Resources

The slides of the talk are published on SlideShare.

All the docker images used during the presentation are available on GitHub:

GitHub API change causes problems to Jenkins and Gerrit

GitHub has recently changed its API default permissions, causing big problems and outages to Jenkins and Gerrit instances configured with OAuth 2.0 authentication.

GerritHub.io has unfortunately been impacted, and this caused two outages today:

  1. 0:40 – 1:10 CEST (temporary GitHub API error overload – automatically resolved)
  2. 10:50 – 11:25 CEST (GitHub API error overload causing the slowdown of HTTP calls and the subsequent exhaustion of our DBMS connection pool)

The second outage was more serious as the GitHub API problems happened exactly at peak hours for European customers.

What is the current situation?

We have added the extra “read:org” scope permission to the default public access of GerritHub.io in order to prevent the GitHub API calls from failing. This change requires you to log out and log back in to GerritHub.io to approve the extra permission flag.

IMPORTANT NOTE: previously authenticated sessions (e.g. batch users for Jenkins jobs) are no longer valid for reading your GitHub organisation ownership and, as a consequence, your Gerrit permissions cannot be fully evaluated. You need to log in to GerritHub.io on behalf of your batch users and accept the new GitHub permissions in order to get new, valid OAuth tokens.

The system is back up and running but slower than usual, due to the extra throttling applied by GitHub because of the error overload. As people log in again and approve the new permissions, the error rate should drop and the situation will come back to normal.

What if I still have problems after having logged in and approved the read:org permissions?

In case of any further issues, please contact GerritForge Support:
www.gerritforge.com/support

[EDIT: 17:53 BST]

We have been monitoring the situation during the day, but the performance of the system was not recovering as quickly as we wanted. The problem was related to the batch users that were still running in the background, using OAuth tokens no longer authorised to perform their actions.

One user from RedHat pointed out:

“You can see it triggered job and the Build results is SUCCESS. But there is no votes or verified status.”

This was caused by the batch user (configured in this case on Jenkins) still being authenticated through its old OAuth token but no longer being authorised to provide the “Verified” status. Batch users typically do not use the GUI, so they have little chance of getting a renewed OAuth token with the correct permissions.

Current situation: workaround in place.

The OAuth scope problem only impacted users associated with a public GitHub plan and thus using the default scopes user:email + public_repo. All the other users, associated with a private GitHub plan, had already granted access to all their private information, including the full list of their public and private organisations.

The workaround in place exploits the weakest link in the chain of GitHub’s protection of the user’s organisation memberships:

  • A user logged in with scopes [user:email + public_repo] cannot access their own list of organisations (the strongest link).
  • The same user can, however, open a web browser and navigate, even without being authenticated, to the URL https://github.com/username and extract the list of organisations at the bottom-left of the page under the H3 tag “Organizations” (the weakest link).

The latest patch, applied later today, simply applies this principle, using the weakest link (page scraping with an anonymous HTTP GET) as compensation for the failure to overcome the strongest link.
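
As a rough illustration of the weakest link: the organisation list is visible on the anonymous profile page, no OAuth token required (username is a placeholder):

$ curl -s https://github.com/username | grep -i "Organizations"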

NOTE: The workaround allows to fill-up the Gerrit cache and gradually eliminates the GitHub throttling on the failed API calls. It allows the service to come back much more quickly to the expected normal response times. You are better anyway to authenticate to GerritHub.io interactively in order to get a renewed OAuth token as hopefully the workaround won’t be necessary anymore in the next few days.