2023: New Year and opportunities for GerritForge and Gerrit Code Review

TL;DR: GerritForge dedicated its efforts to organising and managing the Gerrit User Summit in London in November 2022, in conjunction with the release of Gerrit v3.7. The event was a great success, with a significant on-site presence and record-breaking attendance on the GerritForge TV YouTube channel. GerritForge has also kept its promise to research and improve JGit and Gerrit scalability for large mono-repos with tens of millions of objects and refs. 2023 will see the finalisation of these efforts, with increased development capacity, a new JGit committer pushing the platform to a new level of performance and scalability, and a new, innovative system for collecting and optimising repository metrics automatically. Stay tuned.

Read the full story below (9-minute read).


2022 was a critical year for getting the Gerrit Code Review community and development back on track after the COVID-19 pandemic. At GerritForge, we have been working hard to make sure that the development, support, and innovation of Gerrit Code Review stay focused on their main objectives.

Gerrit Code Review v3.6 and v3.7

We have continued to deliver on the development and release of Gerrit Code Review and its plugins, helping with the testing and release of versions v3.6.0 (May) and v3.7.0 (November).

Some numbers from the past 12 months of development contributions by individual committers and companies:

  • 3,627 Changes have been merged on 76 projects related to the Gerrit Code Review platform, including JGit
  • 113 committers from 42 different organisations

A special mention goes to the top 10 contributors: Google (Ben Rohlfs, Edwin Kempin, Chris Pouchet, Dhruv Srivastava, Frank Borden, Milutin Kristofic), GerritForge (Luca Milanesio), Wikimedia (Paladox) and SAP (Matthias Sohn and Thomas Dräbing).

Compared to 2021, we had 25% fewer changes merged, but with more contributors coming from more companies, which is a sign of a very healthy and thriving ecosystem of maintainers.

GerritForge committed to resuming the face-to-face user summits, which had been suspended since 2020.

The Gerrit User Summit 2022 took place in London, UK, on 10-11 November in a hybrid format, with people having the opportunity to participate either on-site or remotely on GerritForge's YouTube channel.

It was a glorious success, with record-breaking attendance from all around the globe:

  • 50 people registered to attend on-site, 26 of them managed to arrive despite the London tube strike, whilst the others attended remotely
  • 235 people viewed the summit on YouTube with an average view time of 40 mins (one talk)

The summit survey had an outstanding report showing a huge acceptance and appreciation of the event:

  • 82% rated the remote video streaming as “good” or “outstanding”
  • 96% rated the quality of the summit as “good” or “outstanding.”
  • 100% would recommend the summit to a colleague, with 83% strongly recommending it

GerritHub.io SLA gets closer to five-nines.

We have been working hard to make Gerrit more stable and resilient throughout 2022, discovering and fixing many issues in the code base and on the multi-site software architecture.
In 2022, GerritHub.io had only six small hiccups for a total of 19 mins of downtime (SLA = 99.997%) over a 12-month period, a 75% reliability improvement compared to 2021.

We have run extensive RCAs (root-cause analyses) on the causes of the downtime and identified two leading issues, which are explained in detail below.

The “anonymous unlimited query” hole in Gerrit
GerritHub.io was subject to a 15-minute outage caused by anonymous users being able to bring all the sites offline before the system could auto-recover.
Gerrit allowed all limits set in the ACLs for running queries to be bypassed by simply adding the "no-limit" parameter.
Returning an arbitrary payload without limits could allow a single user to generate a server-side workload collecting and building a JSON payload of several gigabytes; unfortunately, that option was available to everyone, including anonymous users, making any publicly facing Gerrit Code Review installation subject to denial-of-service attacks.
We identified, reported, and fixed the issue in Gerrit with Change 333304, which has been included in Gerrit v3.3.10, v3.4.4, v3.5.1, and all v3.6.0 or later releases.
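To give an idea of the class of request involved (this is an illustration, not GerritForge's actual reproduction), here is a minimal sketch of an anonymous query carrying the "no-limit" option against a hypothetical server at gerrit.example.com; on unpatched versions, the server would attempt to build the full, unbounded JSON payload for it:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class NoLimitQueryProbe {
  public static void main(String[] args) throws Exception {
    // Hypothetical Gerrit server; the "no-limit" option asks the server to
    // ignore the ACL-configured cap on the number of results for the query.
    URI query = URI.create(
        "https://gerrit.example.com/changes/?q=status:open&no-limit");
    HttpResponse<String> response = HttpClient.newHttpClient().send(
        HttpRequest.newBuilder(query).GET().build(),
        HttpResponse.BodyHandlers.ofString());
    // Gerrit prefixes JSON responses with the ")]}'" XSSI guard line.
    System.out.println("HTTP " + response.statusCode()
        + ", payload size: " + response.body().length() + " bytes");
  }
}
```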

More granular monitoring and alerting
We have lowered the threshold of the uptime checks on GerritHub.io to 1 minute, giving us the ability to detect and react immediately to four smaller hiccups, and we have detected a lack of scalability for some specific higher-load projects. Those hiccups were responsible for 2 minutes of downtime over the second half of 2022. Many more projects are also planned to be onboarded on GerritHub.io; hence we do need to address these project-specific capacity needs.

Scaling Gerrit Code Review and JGit beyond its limits

We have been investing massive effort in building a test environment designed to stress Gerrit and JGit to their limits and identify all the limitations and bottlenecks that prevented us from scaling further.

Scaling the test repository
Over the months, we have created test repositories that grew in every dimension:

  • Tens of millions of refs as both refs/changes and refs/heads
  • Millions of delta-chains
  • Tens of millions of Git objects
  • Packfiles of tens of gigabytes and packed-refs files of hundreds of megabytes

To generate a significant load on both the client and server side, we have invested further in the aws-gerrit cloud setups and the gatling-git performance-loading tool.

There were some “well-known” issues and additional surprising ones.

SHA1 complexity and CPU utilization for large entities
JGit has been using SHA-1 to identify uniqueness, not just for Git objects but also for other large entities. However, computing SHA-1 has become increasingly CPU-intensive because of the collision-detection logic introduced after the relatively recent findings published on shattered.io.
In cooperation with Matthias Sohn (SAP), we have highlighted two major potential improvements: the raw SHA-1 performance, and its use for detecting packed-refs changes on the filesystem.
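As a rough illustration of why the hashing cost matters (a sketch, not the benchmark we used), the snippet below times the JDK's SHA-1 provider over a fixed buffer; JGit's collision-detecting SHA-1, adopted after the shattered.io findings, is considerably more expensive per byte, so every component that hashes large entities pays that price:

```java
import java.security.MessageDigest;

public class Sha1Throughput {
  public static void main(String[] args) throws Exception {
    // 8 MiB of zeros is enough to get a stable throughput figure.
    byte[] buffer = new byte[8 * 1024 * 1024];
    MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
    int rounds = 100;
    long start = System.nanoTime();
    for (int i = 0; i < rounds; i++) {
      sha1.update(buffer);
      sha1.digest(); // digest() also resets the state for the next round
    }
    double seconds = (System.nanoTime() - start) / 1e9;
    double mbPerSec = (buffer.length / 1e6) * rounds / seconds;
    System.out.printf("JDK SHA-1 throughput: %.1f MB/s%n", mbPerSec);
  }
}
```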

Commit priority queues
JGit has a custom implementation of a priority queue, used intensively in RevWalk, with almost quadratic complexity. That isn't a problem for small to medium chains of commits; however, when the number of commits reaches millions, the performance degradation becomes unbearable.
We have replaced JGit's custom implementation with the one provided by the Java standard library, which has logarithmic complexity and massively improves performance with large commit chains.
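The sketch below is not JGit's actual code, just an illustration of the data-structure swap: a binary-heap java.util.PriorityQueue keyed on the commit timestamp keeps insertions and removals at O(log n), which is what makes walking millions of commits tractable:

```java
import java.util.Comparator;
import java.util.PriorityQueue;
import java.util.Random;

public class CommitQueueSketch {
  // Stand-in for a RevCommit: only the commit timestamp matters here.
  record CommitStub(int commitTime) {}

  public static void main(String[] args) {
    // Newest-first ordering, as a RevWalk typically needs.
    PriorityQueue<CommitStub> queue = new PriorityQueue<>(
        Comparator.comparingInt(CommitStub::commitTime).reversed());
    Random random = new Random(42);
    int commits = 2_000_000;
    long start = System.nanoTime();
    for (int i = 0; i < commits; i++) {
      queue.add(new CommitStub(random.nextInt()));
    }
    while (!queue.isEmpty()) {
      queue.poll();
    }
    System.out.printf("Pushed and popped %d commits in %.1f s%n",
        commits, (System.nanoTime() - start) / 1e9);
  }
}
```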

Unwanted reachability checks
JGit needs to perform a full reachability check whenever an unknown remote client advertises refs, which makes sense when serving an untrusted remote client. However, a full reachability check over millions of advertised refs can be a daunting task, one that may be alleviated if the remote end can be considered trusted.

Fixing JGit bitmaps
Since the introduction of Git bitmaps, the whole community has learned how key they are in speeding up counting and selection during the clone phase.
However, large and unoptimised bitmaps can be so unhelpful for Git that, instead of speeding things up, they represent a massive overhead for the system, causing CPU spikes and, eventually, lowering the throughput of the server.
Git bitmaps are compressed using the JavaEWAH library, which is good for memory consumption but expensive for CPU utilisation: that is why smaller bitmaps are better for performance.
We have discovered and fixed a critical issue with the JGit bitmap generation that was causing the inclusion of all commits and BLOBs pointed to by annotated tags. We have also introduced the ability to tell JGit which heads can be excluded from the bitmap, shortening its creation by orders of magnitude (from a 5h generation time down to as little as 60s for 2k refs) and increasing its effectiveness by 200%.
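As a minimal sketch of how the exclusion could be wired up from JGit (the option name below is an assumption; check the release notes of your JGit version for the exact spelling), the idea is to tell the packer which ref prefixes to leave out of bitmap selection and then repack:

```java
import java.io.File;

import org.eclipse.jgit.api.Git;
import org.eclipse.jgit.lib.StoredConfig;

public class BitmapExclusionSketch {
  public static void main(String[] args) throws Exception {
    // Illustrative path to an existing local repository.
    try (Git git = Git.open(new File("/path/to/repo"))) {
      StoredConfig config = git.getRepository().getConfig();
      // Assumed option name: exclude refs/changes from bitmap selection so
      // the bitmap only covers the heads that clones actually need.
      config.setString("pack", null, "bitmapexcludedrefsprefixes", "refs/changes");
      config.save();
      // Repacking afterwards regenerates the (smaller) bitmap.
      git.gc().call();
    }
  }
}
```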

Millions of unneeded ref logs
When cloning a repository with millions of heads, JGit created one local reflog file for every remote ref, including the ones that were not actually checked out but just fetched as remote references. This created a significant performance gap between JGit and Git, which instead creates the reflog files lazily, once a branch is effectively checked out for the first time. Cloning a single branch of a repository with millions of remote refs took around 1h with JGit, compared to a few minutes with Git.

All of these findings resulted in multiple updates to the following components:

  • JGit changes: all fixes were also provided to stable-5.13, the last supported branch for Java 8, which allows older versions of Gerrit, from v2.16 onwards, to benefit from these improvements.
  • pull-replication went through major performance improvements, achieving execution times up to 1000x faster than the traditional replication plugin
  • aws-gerrit is being upgraded to make use of the pull-replication plugin, including support for bearer-token authentication, which allows replicating virtually any repository, including All-Users.git
  • gatling-git: we have upgraded the Gatling version and JGit to the latest stable-5.13 to include the latest performance improvements.
  • git-repo-metrics: we have introduced a brand-new plugin that keeps the major dimensions of a repository under control and graphs their growth over time.

GerritForge goals for 2023

We are definitely not done yet with the performance improvements on Gerrit and JGit: there are still significant improvements to be made and JGit changes to be merged into the mainstream branches.
We believe we are on track to finalize the job and allow a stable and scalable platform for large Git repositories in 2023.

Finalise what we cooked in 2022 for JGit
JGit has a new maintainer, David Ostrovsky, who was made a committer of the project in 2022. GerritForge's developers are focused on getting more reviews and attention on the JGit performance improvements. We are committed to finalising all the open changes related to large repositories.

JGit multi-pack indexes support
There is still a major gap between JGit and Git when dealing with very active repositories: multi-pack indexes. The proliferation of packfiles eventually leads to a long and painful search-for-reuse phase for BLOBs, which could be cut down by hundreds of times with a multi-pack index.

Git repository optimiser for Gerrit
We have been working on tracking live information about the Git repository, thanks to the git-repo-metrics plugin. Wouldn't it be nice to have a tool that can act on that information automatically?
We will be doing R&D on how to correlate the repository metrics, the Git audit trail, and the performance data to make AI-based decisions on what needs to be improved in the repository.
This work stream is going to be useful for any Git repository, not just the ones powered by Gerrit Code Review. The git-repo-metrics plugin and the repository optimiser would also apply to other products, including GitHub and GitLab.

Gerrit v3.8 and project-specific change numbers
We will finalise the design document for the transition to project-specific change numbers in Gerrit v3.8. That would allow the seamless migration of projects across Gerrit setups without having to worry about change renumbering anymore.

Gerrit Code Review testing and GerritForge-certified binaries
GerritForge is spending a tremendous amount of time developing test environments and tools to serve the Gerrit community with more stable releases and improve the quality of its code. We want to intensify that effort and also offer our platinum support customers a unique service: the GerritForge digital signature and rubber stamp on the binaries of Gerrit Code Review and its plugins that have been successfully tested and validated as production-ready.
Stay tuned; more details are coming soon …

GerritForge company forecast in 2023

GerritForge Inc. will finalise its roll-out to the USA, and all contracts and services will be run from Sunnyvale, CA and Europe. Over 2022, 60% of the customers and businesses have already been moved, and the operation will be completed over the course of 2023.

We are looking forward to doubling our revenue figures in 2023 and also our contributions to the open-source community, with a main focus on JGit as the driver of performance growth for Gerrit Code Review.


2023 is going to be an incredible year for GerritForge, Gerrit Code Review, and the JGit community altogether.

Happy start of the New Year 2023!

Luca Milanesio (GerritForge)
Gerrit Code Review Maintainer and Release Manager
Member of the Gerrit Engineering Steering Committee

The Gerrit User Summit 2022 is back, save the date!

Dear fellow Gerrit User,
We are pleased to announce that GerritForge will be organizing this year’s Gerrit User Summit and Hackathon in hybrid mode: face-to-face and online.

The event is FREE; register and reserve your seat now at:
https://www.eventbrite.co.uk/e/gerrit-user-summit-2022-tickets-424995963367

Gerrit User Summit is the event that brings together Gerrit admins, developers, practitioners, and the whole community, in one place, providing attendees with the opportunity to learn, explore, network face-to-face, and help shape the future of Gerrit development and solutions.

After two years of remote meetings and virtual conferences, this year, we are back face-to-face at CodeNode in the heart of the vibrant City of London.

The dates will be:
Nov 7th to 9th – Hackathon
Nov 10th to 11th – User Summit

Shortly we will be publishing the full schedule and logistics for the event.
I look forward to meeting all the community’s friends, face-to-face or virtually, again during the Hackathon & Summit.

Thanks for being a part of the Gerrit community, and we look forward to seeing you in November.

Luca Milanesio
Maintainer, member of the Engineering Steering Committee, and Gerrit Code Review Release Manager

Gerrit Hackathon is back to London

After two years of remote events and three COVID-19 waves, we are finally back for a new face-to-face hackathon, talking about the future of Gerrit Code Review and coding new and innovative solutions for making Gerrit better, faster and more scalable.

Dates and schedule

The Gerrit hackathon will start on the 9th of May at 9:00 AM and run for five consecutive days, with a daily schedule from 9:00 AM to 7:00 PM BST and networking and catch-ups in the evenings.

For the remote attendees on the US/Pacific time-zone, the schedule will be daily between 7:00 AM to 11:00 AM PDT, which allows 4h of remote interaction with the hackathon in London.

Who is invited to attend the hackathon?

As with every Gerrit hackathon, the audience is restricted: Gerrit maintainers and contributors are invited to join. We have 10 seats available on-site and 15 seats available remotely, which allows plenty of people to collaborate and discuss.

The “Alphabet” meeting room of the hackathon

To register for the Gerrit hackathon, add your name and role ("Gerrit Contributor" or "Gerrit Maintainer") to the attendees sheet. All Gerrit maintainers have edit permissions on the document, whilst all other contributors can request permission to edit if they are willing to attend.

Where is the hackathon taking place?

GerritForge will host the Gerrit Hackathon at Huckletree West, Mediaworks, 191 Wood Ln, London W12 7FP. We will be staying in the "Alphabet" meeting room, with a dedicated 10-seat roundtable, a full-size wall-mounted whiteboard, and a permanent online connection with a wall-mounted screen to interact with all the remote attendees.

Huckletree West

Huckletree is a creative workspace in West London, based in the heart of White City Place, a thriving new business and cultural district. Alongside the neighboring BBC Studios, Net A Porter Group, and RCA School of Communication, Huckletree West is part of a bold new chapter in the rich creative history of the neighborhood.

For all remote attendees, there will be the ability to connect remotely and interact with the rest of the team on-site during the hackathon hours.

White City and local accommodations

Huckletree West is close to the WestField Shopping Centre in White City, which includes 289 stores, 95 restaurants and Cinemas with 20 screens and almost 3,000 seats.

White City has excellent connections to the rest of London through the London Underground network (Central, Hammersmith & City, and Circle lines) and Overground trains, which allow you to reach all other parts of the city.

WestField shopping centre – White City

You can look for a hotel or other accommodation (B&B or hostel) in any other part of London covered by the London Underground connections. However, if you prefer to stay local, there are many hotels and B&Bs near White City starting from £80/night.

Travelling to the hackathon

By airplane: from the London Heathrow terminals, take the Piccadilly Line towards Central London as far as Hammersmith, then take the Hammersmith & City line (the station is across the street) to Wood Lane station. From London Stansted, take the Stansted Express train to Liverpool Street station and then the Circle line towards Hammersmith, getting off at Wood Lane station.

By train: from the Eurostar terminal at St. Pancras International, take the Hammersmith & City or Circle line (towards Edgware Road) to Wood Lane station.

Taxi: you can use the London Black Cab as well as other cheaper alternatives such as Uber or local minicabs companies.

From Wood Lane station, it is a 6-minute walk to Huckletree West, located on the ground floor of the MediaWorks building.

Route from Wood Lane Station to Huckletree West

Brexit restrictions

The UK left the European Union on the 1st of January 2021, and all travellers from the EU need to follow the new rules for business trips. You can check on the UK Government site whether you need a visa, and what documentation and insurance you are required to show at the UK Border.

COVID-19 restrictions

The UK is set to end all COVID-19 restrictions by March 2022, which means there aren't any vaccination or testing requirements for attendees of the hackathon. We advise everyone attending face-to-face to take extra precautions and take a lateral-flow test (LFT) or antigen test before travelling to the hackathon, even though it is not required by law or regulations.

Please note that face coverings are still mandatory whilst travelling by airplane, train, or underground, and during taxi rides.


We are excited to meet the community of Gerrit Code Review maintainers and contributors again after so many months. Come and join us in London this year, so we can innovate again and help shape the future of the Gerrit project, together.

Luca Milanesio, GerritForge
Gerrit Code Review Maintainer
Gerrit Code Review Release Manager
Member of the Engineering Steering Committee of the Gerrit Code Review Open-Source project

Gerrit User Summit is calling: submit your presentation

The Gerrit Code Review community invites you to participate in the virtual Gerrit User Summit 2021, scheduled to take place online on the 2nd and 3rd of December, from 8 am to 11 am PST.

This year's program will offer a keynote, six presentation sessions, and seven 'lightning' (10-minute) talks distributed over two days, so that you can share your ideas, research, demonstrations, and live demos, and network with the Gerrit community and those interested in learning about and adopting Gerrit Code Review in their development process.

Submit your presentation proposal by creating a change in the Gerrit Summit 2021 repository, following these steps:

  • Login to https://gerrit-review.googlesource.com
  • Go to the Gerrit Summit 2021 repository
  • Click “CREATE CHANGE” button and specify the branch (master) and the headline of your talk
  • Click on “EDIT” button on the top-right to edit your change
  • Click on the "ADD/OPEN/UPLOAD" button, enter the filename for your talk (e.g. sessions/super-duper-repos.md for a talk or lightning-talks/mini-session.md for a lightning talk), and upload the text of your talk by dragging the markdown text into the window.
  • Click the “PUBLISH EDIT” button on the top-right of the change screen
  • Click on the “MARK AS ACTIVE” button on the top-right of the change screen

Your talk will then be reviewed by the community and, when accepted, merged into the Gerrit User Summit 2021 site.

Gerrit User Summit 2021: how to tackle big mono-repos

GerritForge keeps working to improve Gerrit's performance with very large mono-repos, with millions of refs, hundreds of GBs, and tens of millions of objects.

On the second day of the virtual Gerrit User Summit 2021, Luca will present the work done over the past two years to overcome significant difficulties when:

  • Reducing the overhead of refs advertisement
  • Speeding-up clones by a 10x factor
  • Reducing the system load when accessing change notes
  • Increasing performance of replication
  • Surviving the deadly “search-for-reuse” phase during git-upload-pack

Register here to join the user summit on December 2nd and 3rd.

Accelerate with Gerrit DevOps Analytics, in one click!

 

Accelerating your time to market while delivering high-quality products is vital for any company of any size. This fast-paced, ever-evolving world relies on getting quicker and better across the production pipeline of your products. The DevOps and Lean methodologies help achieve the speed and quality needed by continuously improving the process in a so-called feedback loop. The faster the cycle, the quicker you can achieve the competitive advantage needed to outperform and beat the competition.

It is fundamental to have a scientific approach and put metrics in place to measure and monitor the progress of the different actors in the whole software lifecycle and delivery pipeline.

Gerrit DevOps Analytics (GDA) to the rescue

We need data to build the metrics around which we design our continuous-improvement lifecycle. We need to extract information from all the components we use, directly or indirectly, on a daily basis:

  • SCM/VCS (Source and Configuration Management, Version Control System)
    how many commits are going through the pipeline?
  • Code Review
    what’s the lead time for a piece of code to get validated?
    How are people interacting and cooperating around the code?
  • Issue tracker (e.g. Jira)
    how long does the end-to-end lifecycle take outside of development, from idea to production?

Getting logs from these sources and understanding what they are telling us is fundamental to anticipating delays in deliveries, evaluating the risk of a product release, and making changes in the organisation to accelerate the teams' productivity. That is not an easy task.

Gerrit DevOps Analytics (aka GDA) is an open-source solution for collecting data, aggregating it along different dimensions, and exposing meaningful metrics in a timely fashion.

GDA is part of the Gerrit Code Review ecosystem and has been presented during the last Gerrit User Summit 2018 at Cloudera HQ in Palo Alto. However, GDA is not limited to Gerrit and is aiming at integrating and processing any information coming from other version control and code-review systems, including GitLab, GitHub and BitBucket.

Case study: GDA applied to the Gerrit Code Review project

One of the golden rules of Lean and DevOps is continuous improvement: “eating your dog food” is the perfect way to measure the progress of the solution by using its outcome in our daily life of developing GDA.

As part of the Gerrit project, I have been working with GerritForge to create open-source tools to develop the GDA dashboards. These are based on events coming from Gerrit and Git, but we also extract data coming from the CI system and the issue tracker. These tools include the ETL for the data extraction, as well as the presentation of the data.

As you will see in the examples, Gerrit is not just the code review tool itself but also its plugin ecosystem; hence you might want to include the plugins as well in any collection and processing of analytics data.

Wanna try GDA? You are just one click away.

We made GDA more accessible to everybody, so more people can play with it and understand its potential. We created the Gerrit Analytics Wizard plugin so you can get insights into your data with just one click.

What you can do

With the Gerrit Analytics Wizard, you can get started quickly: with only one click you get:

  • An initial setup of an Analytics playground with some default charts
  • A dashboard populated with data coming from one or more projects of your choice

The full GDA experience

When using the full GDA experience, you have full control of your data; you can:

  • Schedule recurring data imports (the Wizard only runs a one-off import of the data)
  • Create a production-ready environment (the Wizard is meant to build a playground to explore the potential of GDA)

What components are needed?

To run the Gerrit Analytics Wizard you need:

  • A running Gerrit instance
  • The GDA Analytics and Analytics Wizard plugins installed
  • Docker, to run the ElasticSearch and Kibana containers used by the dashboard

You can find here more detailed information about the installation.

One click to crunch loads of data

Once you have Gerrit and the GDA Analytics and Wizard plugins installed, choose the top menu item Analytics Wizard > Configure Dashboard.

You land on the Analytics Wizard and can configure the following parameters:

  • Dashboard name (mandatory): name of the dashboard to create
  • Projects prefix (optional): prefix of the projects to import, e.g. "gerrit" will match all the projects starting with the prefix "gerrit". NOTE: The prefix does not support wildcards or regular expressions.
  • Date time-frame (optional): date and time interval of the data to import. If not specified the whole history will be imported without restrictions of date or time.
  • Username/Password (optional): credentials for Gerrit API, if basic auth is needed to access the project’s data.

Sample dashboard analytics wizard page:

[Screenshot: Analytics Wizard configuration page]

Once you are done with the configuration, press the "Create Dashboard" button and wait for the dashboard, tailored to your data, to be created (beware: this operation will take a while, since it requires downloading several Docker images and running an ETL job to collect and aggregate the data).

At the end of the data crunching you will be presented with a Dashboard with some initial Analytics graphs like the one below:

[Screenshot: sample GDA dashboard with initial analytics graphs]

You can now navigate among the different charts and across different dimensions (time, projects, people, and teams), uncovering the potential of your data thanks to GDA!

What has just happened behind the scenes?

When you press the "Create Dashboard" button, loads of magic happens behind the scenes. Several Docker images are downloaded to run an ElasticSearch and Kibana instance locally, set up the dashboard, and run the ETL job that imports the data. Here is a sequence diagram illustrating the chain of events:

[Sequence diagram: components involved in the dashboard creation]

Conclusion

Getting insights into your data is so important and has never been so simple. GDA is an OpenSource and SaaS (Software as a Service) solution designed, implemented and operated by GerritForge. GDA allows setting up the extraction flows and gives you the “out-of-the-box” solution for accelerating your company’s business right now.

Contact us if you need any help with setting up a Data Analytics pipeline or if you have any feedback about Gerrit DevOps Analytics.

Fabio Ponciroli – Gerrit Code Review Contributor – GerritForge Ltd.

GerritHub and Zero-Downtime Upgrade

GerritHub gets bigger on Mon, 21 March 08:00 GMT


GerritHub has experienced unprecedented growth over the past two years. The November 2015 numbers presented at the Gerrit User Summit at Google in Mountain View, CA have been surpassed again, and we need to make sure that our infrastructure is still capable of dealing with current and future users' needs.

What is changing in GerritHub.io?

We are changing everything, from the version of Gerrit to the hardware, network, and storage infrastructure. Data, DBMS, indexes, and caches need to be upgraded and refreshed to make sure that the new systems reflect exactly the current production data and sessions.
We are also changing the geo-location of our servers, from the current server farm in Germany (Bayern, Nuremberg – 100 MBps) to a new server farm in Canada (Quebec, Beauharnois – 1 GBps).

Why have so many changes?

We started to measure significant delays in the Git and review operations on the old infrastructure, mainly due to three factors:

  1. More users, more repositories, more concurrency. Individuals, OpenSource projects and Businesses started using GerritHub.io for their mission-critical repositories, considering Gerrit the “source of truth” of their review workflow. We needed more horsepower, memory, storage and ability to scale even further.
  2. Bandwidth from USA and Far-east. The majority of people using GerritHub.io are from the other side of the Atlantic Ocean: this is typically not a problem from 7 AM to 3 PM … but after 4 PM the connectivity between Europe and the Americas becomes slow. Additionally, people using GerritHub.io from India, Japan, Australia and New Zealand experienced terrible slowdowns because of the excessive number of hops to reach Germany.
  3. Gerrit master is much faster. Based on the current data and metrics measured on GerritHub.io, we have contributed a lot of patches to reduce the overhead caused by Gerrit DB and lessen the number of SQL queries per minute. All those new improvements are on Gerrit master, and we need to catch-up with the “latest and greatest” version.

Will I experience any GerritHub.io outage?

The last time GitHub needed to make a major upgrade, it asked its 5M users to stop working for 23 minutes. This translates to a loss of two million hours of continuous-delivery lifecycle, equivalent to over 130 man-years, worth no less than eight million dollars.
We are going to adopt a new zero-downtime Gerrit roll-out strategy to make sure that all those changes are not going to impact your day-by-day activity. If you were not reading this post, you would possibly not even notice the "switch" from the old to the new infrastructure, apart from the increase in speed and bandwidth.

Zero-downtime GerritHub.io migration, step by step with the associated expected timings.

Phase 0 – Replication to the new Gerrit infrastructure (1 month before)
We started migrating everything one month ago, and the old and new infrastructures are working side by side, thanks to Gerrit master-slave replication. The new Gerrit servers are active as slaves and are read-only.

Phase 1 – Migration kick-off (08:00 GMT)
We install a Gerrit plugin that rejects all pushes to GerritHub.io repositories with a courtesy message: "Gerrit is under maintenance, all projects are READ ONLY". All HTTP POST, PUT, and DELETE operations are disabled on the Gerrit REST API. A sketch of the push-rejection idea is shown below.
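The actual plugin is not shown here; purely as an illustration, a similar read-only gate can be sketched with JGit's pre-receive hook API (Gerrit's receive path is built on JGit's ReceivePack), rejecting every incoming ref update with the courtesy message:

```java
import java.util.Collection;

import org.eclipse.jgit.transport.PreReceiveHook;
import org.eclipse.jgit.transport.ReceiveCommand;
import org.eclipse.jgit.transport.ReceivePack;

/** Rejects every push with a maintenance message, making the server read-only. */
public class ReadOnlyPreReceiveHook implements PreReceiveHook {
  private static final String MESSAGE =
      "Gerrit is under maintenance, all projects are READ ONLY";

  @Override
  public void onPreReceive(ReceivePack rp, Collection<ReceiveCommand> commands) {
    for (ReceiveCommand cmd : commands) {
      cmd.setResult(ReceiveCommand.Result.REJECTED_OTHER_REASON, MESSAGE);
    }
  }
}
```

A hook like this would be registered on the server side with receivePack.setPreReceiveHook(new ReadOnlyPreReceiveHook()) before serving the push.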

Phase 2 – Wait for replication events to complete and migrate the DB (08:02 GMT)
Git repositories are continuously replicated, but we need to make sure that the event queue is empty. Once that happens, we schedule the final DB migration to the new infrastructure.

Phase 3 – Gerrit DB upgrade and reindex (08:04 GMT)
The new Gerrit server executes the final upgrade and an off-line reindex of the latest received changes.

Phase 4 – Gerrit start-up and cache warm-up (08:05 GMT)
The new Gerrit is restarted and the most critical Gerrit caches (projects, accounts, and groups) are pre-loaded into memory. This allows the system to absorb the incoming traffic spike without exhausting its threads and makes the transition as smooth as possible, without slowdowns.

Phase 5 – Traffic switch and DNS updates (08:06 GMT)
GerritHub.io redirects all incoming HTTPS and SSH traffic to the new infrastructure. Git pushes and HTTP PUT, POST and DELETE operations of the REST API are operational again and served by the new Gerrit infrastructure. GerritHub.io DNS is updated to the new Canadian IPs.

Phase 6 – New IPs propagated to all worldwide DNS servers (+1 day)
Once all the DNS servers in the world have been updated, everyone will go directly to the new infrastructure without further hops or redirections from Germany. Customers from the USA, Canada, South America, Asia, Japan, New Zealand, and Australia should see a significant reduction in network latency and an increase in GerritHub.io responsiveness.

Firewall and SSH considerations

Even though the Gerrit server's SSH key is not changing, some of you may see a warning similar to this when pushing or pulling over SSH:

Warning: the RSA host key for ‘review.gerrithub.io’ differs from the key for the IP address ‘148.251.77.70’

The warning message will also tell you which lines in your ~/.ssh/known_hosts need to change. Open that file in your favorite editor, remove or comment out those lines, then retry your push or pull.

Should your network have strict firewall rules for accessing external sites, you may want to whitelist the IP of the new infrastructure's WLB: 192.99.233.76.

Follow GerritHub.io migration progress.

We will advertise the migration progress on Twitter at @GitEnterprise. Should you have any issue you can tweet us or contact GerritForge Customer Support at support@gerritforge.com.

GerritForge helps Gerrit 3.0 stability

Gerrit 3.0 plan announced: we need stabilisation now


The Gerrit 3.0 plan and its NoteDB reviews have been officially announced at the Gerrit User Summit 2015. NoteDB is already available as an experimental feature in the current Gerrit master, but it needs much more stability in order to be officially supported for production.
GerritForge decided to help by reusing its existing continuous integration system to validate every Gerrit patch set against both the current and the new NoteDB review persistence back-ends, in order to avoid regressions during the 2.13 and 3.0 development.

Pre-commit validation by GerritForge CI

If you posted a patch to gerrit-review.googlesource.com in November, you may have received a Verified +1 from a strange user with a Diffy logo on the side.
GerritForge's CI at gerrit-ci.gerritforge.com automatically fetches every patch set pushed to gerrit-review.googlesource.com and triggers a slightly modified Gerrit build to check whether the code change introduces a regression. At first sight this may seem a quite normal Gerrit-to-Jenkins job integration; however, implementing it on top of Google's multi-master replicated installation was not a piece of cake.

Gerrit Trigger plugin limitations on multi-master setups

Jenkins already has an out-of-the-box integration with Gerrit provided by the Gerrit Trigger plugin, maintained by Robert Sandell (Cloudbees). It leverages the Gerrit stream events through an SSH channel and makes use of the Gerrit REST API to action them according to the build result.
Google's Gerrit setup, however, is not a trivial one-node installation and is further limited by the security constraints of the Google infrastructure, which does not allow any incoming SSH connectivity.
Additionally, the whole concept of "getting the events in a stream" doesn't work when events can come concurrently from multiple places at the same time: who is going to define the "global ordering", and how do you put all those events into a single TCP/IP socket? Even UDP would not work in this case, because an SSH channel requires confidentiality between two, and only two, peers.

Alternatives to SSH

During the hackathon, other approaches were discussed by Shawn Pearce, including the use of HTTP WebSockets (or Cometd) for fetching events without the need for an SSH connection. Events are still distributed and generated by multiple masters all the time, and the Jenkins plugin would then have the onus of contacting all the Gerrit servers and keeping a connection open to all of them. This is clearly not going to work, because the number of servers, their IPs, and their locations may change at any time, and the solution would eventually be in danger of losing precious events.

Back to polling

The only solution we envisaged was to fall back to a polling logic, where Jenkins asks Gerrit every 10 minutes "what's new since the last time we spoke?". This goes against the main reason the Gerrit Trigger plugin was designed: avoiding SCM polling. It is, however, a much better and more optimised polling strategy; let's see why.

Query and then fetch

The typical Git SCM polling relies on fetching all references at every poll interval and detecting whether new Git commits are available. This is notably slow and generates a huge overhead on the Git server. The approach we took is quite different and makes use of the Gerrit search capabilities, which are way faster and more powerful than a simple Git fetch.
Jenkins first asks Gerrit for the list of changes and associated commit IDs involved in any event since the last polling time: the result may include patch sets that have already been built, to avoid having any gaps between polling intervals. The search is fast and implemented in … you know, Google is a search company, isn't it?
Once the list of candidate commit IDs is identified, Jenkins goes through all of them and checks, using the Gerrit REST API:
– has it been built during my previous execution?
– has it already been accepted (or rejected) by me?
The commit IDs that result as not having been checked before and not yet validated are then used to trigger a specific job parametrised on:
– the specific branch
– the specific change ref-spec
Fetching is performed avoiding any wildcards, and the corresponding load on the Git server is minimal. Fetch (Git protocol) + build (using Buck) + test (unit + integration) + review feedback (REST API) takes an average of 5 minutes, which is an amazing result if you consider the size of the Gerrit project and the typical slow speed of a default Jenkins Git fetch.
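As an illustration of the first step (a sketch, not the actual verifier code), the request below asks Gerrit for the changes updated within the last poll interval, together with their current revision, so that each candidate can then be fetched by its exact refs/changes/... ref without any wildcard:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PollRecentChanges {
  public static void main(String[] args) throws Exception {
    // "-age:10min" matches changes updated within the last 10 minutes;
    // o=CURRENT_REVISION adds the SHA-1 of the latest patch set so that the
    // job can fetch refs/changes/<nn>/<change>/<patch-set> directly.
    URI query = URI.create("https://gerrit-review.googlesource.com/changes/"
        + "?q=-age:10min&o=CURRENT_REVISION");
    HttpResponse<String> response = HttpClient.newHttpClient().send(
        HttpRequest.newBuilder(query).GET().build(),
        HttpResponse.BodyHandlers.ofString());
    // Strip the ")]}'" XSSI guard line before handing the JSON to a parser.
    String body = response.body();
    System.out.println(body.substring(body.indexOf('\n') + 1));
  }
}
```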

The bottom line

Using the query + fetch approach, which seemed a bit slow and old-fashioned at the beginning, was eventually very simple and successful. Instead of setting up SSH host-key verification, key exchange, and ad-hoc channels, the only configuration needed is a valid Gerrit user and the HTTPS endpoint URL, the same one used for cloning the code.
The solution is much more reliable, as SSH channels are notably unstable and consume server threads. The only drawback is the slight delay between the patch-set upload and the start of the build (at most 10 minutes), which is acceptable in most cases.

Results

Since its roll-out, more than 1,200 patches have been checked and rated, a lot of potential Gerrit regressions have been avoided and, more importantly, we have prevented the NoteDB code from diverging in stability from the current mainstream development.

How can I re-trigger validation for a single change?

We have enabled anyone to trigger ad-hoc executions of the Gerrit validation flow using the following URL:
https://gerrit-ci.gerritforge.com/job/Gerrit-verifier-flow/build
This is a standard Jenkins parametrised build that requests the change to be built, identified by either its SHA-1 or its number. Once the job is triggered, the build will be executed and the validation feedback applied to your change, regardless of the previous build or validation status.

Gerrit User Summit & Hackathon 2015

Gerrit User Summit at Google Mountain View – CA

Gerrit user Summit 2015

Exciting times this year at the Gerrit User Summit and Hackathon 2015: the major contributors and players of the Gerrit community shared experiences, opinions, and news in an intense 7-day event at Google in Mountain View, CA.

The User Summit

The first two days saw the User Summit at the centre of the stage: scalability, scalability, and scalability were the leitmotif of the discussion.
GerritHub.io started the saga with the astonishing numbers of its two years of growth:

  • 6.5K users (+580%)
  • 41K changes (+1700%)
  • 16K projects (+530%)
  • 500GBytes (+500%)

The two years of live production experience with users coming from GitHub have highlighted problems (replication, repository sharding, multi-master) and possible solutions, ranging from Gerrit Virtual Private Hosting to on-premises deployments integrated with BitBucket and GitHub:Enterprise.
Ericsson continued with a very detailed description of how Gerrit is used as the "Enterprise Washing Machine" for all code that goes through the development pipeline: scalability and control are the fundamental keywords that were repeatedly mentioned and enforced.
Replication across sites has been massively improved in terms of performance and stability since the introduction of Git/HTTPS as the main protocol for replication, as previously advised by GerritForge in 2014.
The first day ended with Google stunning everyone with its hypersonic numbers:

  • 1.6M changes / 3.8M patch-sets
  • 240 Gerrit virtual nodes
  • 2.5M repositories

The second day was all about the future of Gerrit, with three very interesting features coming soon.

Gerrit 3.0 and NoteDB

Dave Borowitz presented the status of the replacement of the Gerrit DBMS with a 100% open-standard solution based on commit notes, implemented by all open-source and commercial "flavours" of Git. The solution will allow interoperability with other code-review implementations (e.g. Phabricator) and will fully enable review replication and off-line operations.
The new DBMS-free version of Gerrit will be called Ver. 3.0, and it will be the next version after the current Gerrit master (Ver. 2.13) gets released.

Gerrit PluginManager: one plugin to rule them all

Luca Milanesio presented a new vision of how to discover and install plugins on the Gerrit platform: the Gerrit PluginManager. We should not bundle more and more plugins with Gerrit, which would eventually lead to an explosion of the WAR file size. You can now install only a "plugin-manager", which then guides you through the "one-click" set-up of all the others.
Unlike similar solutions such as the Jenkins Update Centre, the Gerrit PluginManager is based on the live status of the plugins and their compatibility with Gerrit: as soon as a plugin gets patched and successfully compiled with a version of Gerrit, it will automatically be listed and made available by the PluginManager. Additionally, GerritForge will provide a list of certified and guaranteed plugins that have been successfully tested with Gerrit.

PolyGerrit, the new web-component UX for Gerrit.

Andrew Bonventre is the Googler driving the transformation of the Gerrit UX to the new Polymer platform; however, for personal reasons he was not able to attend the Summit. Dave Borowitz stood in for him and unveiled what will be the (very) near future of the Gerrit UX: no more GWT and complex black magic transforming Java into obscure JavaScript; the future is coming through web components, a new emerging and promising standard for HTML and modern web browsers.
Despite the UX being at a very early stage, Dave was able to showcase a fully functional list of changes and a search box powered by Polymer and calling the Gerrit REST API: really fast and promising!

Gertty, the text-only Gerrit Code Review.

On the complete opposite side, why not use Gerrit from an 80×25 text-only console? Gertty is an astonishing 100% character-based console client, based on the Gerrit REST API and a local SQLite DB for caching changes. It allows complete off-line operation and synchronisation with Gerrit changes. Productive and effective while you are on the go.

CollabNet and Gerrit tuning cheat-sheet

CollabNet presented a useful four-page brochure guiding you through the tuning of a Gerrit set-up for a small, medium, or large installation. Based on their experience of running TeamForge SCM, the commercial fork of Gerrit Code Review based on their existing TeamForge ALM proprietary solution, they have been able to experiment with the Gerrit default settings and learn how to adjust them to leverage the full power of your setup.
The audience appreciated the effort and encouraged CollabNet to post all the findings as Gerrit reviews, to make the code base better and improve the default settings of the set-up.

Gerrit and Jenkins Workflow dance together with Docker.

Cloudbees presented a very effective demo of how to use Gerrit and the Jenkins Workflow plugin to implement a real-life Continuous Delivery pipeline. The presentation leveraged the use of Docker images, as previously presented by Stefano Galarraga in his "Gerrit and Jenkins Continuous Delivery Pipeline for BigData" talk. They both explained and showed how code review is key to implementing a successful and smooth code validation and roll-out. Stefano's presentation made use of the exciting new "topic submission" feature, which enables grouping and submitting multiple changes across repositories.

The Hackathon, topics and improvements.

Gerrit Hackathon 2015

A lot of code was pushed during the hackathon:

  • 400 changes merged
  • 3 Gerrit versions released

Gerrit metrics

Surely the hot topic of the hackathon was the introduction of a new pluggable metrics engine in Gerrit, currently half-merged into the master branch. Gustaf demoed how it is now possible to use standard tools such as Graphite and the JMX console to extract, display, and graph the most relevant Gerrit metrics in real time. This is similar to what the JavaMelody plugin was providing, with the added value of getting the data outside the Gerrit JVM and analysing it in greater detail with a standard monitoring platform.

PolyGerrit

Gerrit master has been officially upgraded to allow the development of the new Polymer-based UX for Gerrit, code-named "PolyGerrit". From now on, Gerrit master will require the installation of NodeJS during development. This is needed for building and packaging the "vulcanised" version of the Gerrit UX, which contains the basic components of the user interface. At the moment the only thing visible is the demo list of changes presented by Dave Borowitz at the User Summit; however, new changes are coming, and Google announced that they are targeting Q4 2015 for a first internal release of the new UX.

GerritForge CI Verifier

As Gerrit 3.0 will be completely revamped in terms of review persistence, the community pushed for stricter change validation on both the old DBMS-based and the new NoteDB-based persistence. GerritForge extended the use of its CI system (https://gerrit-ci.gerritforge.com) to cover the validation of every change / patch set uploaded to Gerrit from now on. This is a substantial improvement to the code review workflow of Gerrit itself and will hopefully contribute to a stable and solid Ver. 3.0 release next year.

Externalisation of Gerrit Hooks and Events to plugins

Qualcomm worked on completing the externalisation of Gerrit hooks and stream events into plugins. This change will allow plugging in different event providers depending on the type of Gerrit set-up, single node or multi-master. One more important step towards an open-source implementation of Gerrit multi-master.

GitHub outage, again :-( What is the real cost of FREE services?


As a bitter surprise, today we are experiencing another GitHub outage. This time it seems a more serious problem than the average DDoS: GitHub's Ops Team is performing emergency maintenance on the whole site to recover the situation.

How much does a FREE GitHub service outage really cost me?

Everyone loves GitHub because it is nice, easy and, most of all … it's FREE! Lots of projects started using it for much more than pure source code versioning:

  • People write books and documentation with it (see gitbook.com)
  • Teams started using it as a free artifact repository manager: projects won't build at all when GitHub is down
  • Companies started hosting web pages on GitHub (see the nicely rendered microsoft.github.io)
  • GitHub issue tracking and wikis are so simple that people are using them for project collaboration

When everything works, it is amazing how productive your team can be using GitHub on a daily basis. But when it fails, what can you do? And what if my team cannot progress because they can't see the tasks, wikis, requirement documents, web pages … how much money am I really wasting while people are hanging around for hours?

Let's consider a small Agile team composed of 1 BA, 8 Agile devs, 1 Scrum Master, 2 DevOps and 2 QA engineers: a 30-minute outage like the one today impacts those 14 people for roughly one man-day in total (14 people × 0.5 h is about 7 man-hours of direct loss, plus the time to get back into flow), which means, for the US market, roughly $1,000 (as an optimistic guess; it may cost even more). Even if GitHub goes down only twice a year (gosh, this happened more than twice, I am afraid) your start-up will end up paying around $2,000/year for GitHub. The overall amount doesn't sound that expensive … but you start to wonder why GitHub "was supposed to be really FREE" if you end up spending money on it.

If we apply the same figures to a medium-sized company with at least 160 people working on development, the overall figure jumps to $20,000/year. More importantly, the time lost and the delays caused to the project schedule may then have an avalanche effect on other teams, possibly causing additional pain and costs across your organisation and programme plan. Those extra costs can sometimes be difficult to quantify, but for sure they are much more relevant to your overall business.

Shall we give up using GitHub then? Or shall we move to GitHub:Enterprise instead?

The typical reaction to a GitHub outage is: "we cannot rely on the FREE version, we should buy GitHub:Enterprise, which will run inside our company network", followed by using that argument with your manager to get a purchase order finalised NOW (I may be too malicious … but an outage may actually generate more money for GitHub than loss of reputation). When you look at the GitHub:Enterprise pricing, it turns out that for your 160 people you would need to spend only $36,000/year, which is on the same order of magnitude as the $20,000 already wasted, without even considering the extra hidden costs of project delays.

But are you really solving the problem? GitHub and GitHub:Enterprise are the same product, the same code base, just different pricing. What makes you think that your internal Ops Team can do a better job than GitHub? What makes you think that a GitHub bug would not appear on your GitHub:Enterprise set-up? Are you just an optimistic person?

Moving to GitHub:Enterprise is typically needed when you have compliance or security requirements on data at rest, but it does not really address the problem of reliability and would potentially expose your team to even more outages, for software upgrades and management, that you typically don't have when using GitHub alone. You would then be spending $36,000 on top of the $20,000 (or more) wasted previously, without any real benefit.

Learning how to fly with GitHub

How to solve the problem then? Can we learn from somebody’s else experience?

Airplanes have exactly the same (if not even more demanding) requirements on their engines as we have on a version control system. For an aircraft, cruising speed is everything: without the speed provided by its engines, it cannot fly. We have similar requirements in our development team, where GitHub is really what we need to progress our development; otherwise, we are blocked.

The solution that makes an airplane reliable is not buying more expensive engines (which are not necessarily more reliable) but using two engines instead of one. Can we apply the same to GitHub? GitHub is, in a nutshell, a Git server, so why not rely on redundancy and replication? Can I set up a replica of GitHub and use it for my reviews?

You can of course build your own replica using plain Git and GitHub WebHooks: it would require a bit of scripting, but it can be done (see the sketch below). During an outage you can use the replica, and when GitHub is back, all the pending changes can be pushed back to GitHub.
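A minimal sketch of the syncing part (the paths and URLs below are placeholders): a bare local replica is kept up to date by mirror-fetching every ref from GitHub, something a webhook receiver or a cron job could run after each push:

```java
import java.io.File;

import org.eclipse.jgit.api.Git;
import org.eclipse.jgit.transport.RefSpec;

public class MirrorSync {
  public static void main(String[] args) throws Exception {
    // Hypothetical bare replica previously created with "git clone --mirror".
    File replica = new File("/srv/git/myproject.git");
    try (Git git = Git.open(replica)) {
      git.fetch()
          .setRemote("https://github.com/example/myproject.git")
          .setRefSpecs(new RefSpec("+refs/*:refs/*")) // mirror every ref
          .setRemoveDeletedRefs(true)                 // prune refs deleted upstream
          .call();
    }
  }
}
```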

Can I have another FREE and automated replica of GitHub?

This is becoming challenging now: we want something that is completely FREE (no time spent in writing scripts, webhooks, no service provider to pay, no commercial product) but that allows us to use GitHub replicated, including Code Reviews.

It may seem strange, but what we are looking for actually exists, and it is an open-source project called Gerrit Code Review. It is not only a code review and Git server like GitHub, but it also offers more advanced security and replication capabilities. It has been designed taking into account the needs of large distributed teams, making their daily development lifecycle more reliable and independent of local failures.

Cool, how can I get started with Gerrit and GitHub now with no hassles?

You can read this quick introduction to get started with setting up your private replica or, if you are really in a hurry and want a FREE hosted service, you can sign up with 3 clicks on GerritHub.io.

I have only 5 mins of free time today: what can I read/watch to understand how it works?

Well, there are plenty of resources but if you are really in a hurry, you can watch the following YouTube Video:

If you have more time, you can read the Gerrit Code Review overview and tutorial at: https://review.gerrithub.io/Documentation/intro-quick.html

Get ready now, to avoid wasting money again when the next GitHub outage … which nobody wishes for … (sadly) happens 😦