GerritHub.io and the new Reviewer role


Good news for everyone who wanted to use Gerrit Code Review on top of their GitHub repositories but has so far been concerned about sharing profile information: you can now keep your details private and still review other people’s changes on Gerrit.
If you are an EU citizen, you have rights guaranteed by the GDPR, and the GitHub OAuth scope requested by default can now be limited to the bare minimum.

The default scope explained

When signing up with GerritHub.io, you have the ability to define the scope of access to your personal information held on GitHub:

  • Your e-mail
  • Your membership in organizations and teams
  • Your repositories

The default access requested for doing some activity on Gerrit is user:email + public_repo + read:org, which allows Gerrit to see your e-mail address, clone and push to your public repositories on your behalf, and see the list of your teams and organizations. Those permissions are needed when you want to push code to GerritHub.io: Gerrit has to resolve your groups (Organizations/Teams), grant you permission to push your code to Gerrit and, eventually, push to GitHub as well on your behalf.
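For reference, this is roughly what the corresponding GitHub OAuth authorization request looks like; the scopes are the ones listed above, while CLIENT_ID is just a placeholder, not GerritHub’s real application id:

# Sketch of the GitHub OAuth authorization request with the default scopes
# (CLIENT_ID is a placeholder for the registered OAuth application)
curl -I "https://github.com/login/oauth/authorize?client_id=CLIENT_ID&scope=user:email%20public_repo%20read:org"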

The problem with GDPR

The default scope may not fit the GDPR conditions for EU citizens who do not intend to contribute any code but only want to participate in the review of existing changes. That’s why GerritHub.io introduced a new GitHub scope page that allows external reviewers to sign up with the bare minimum information that Gerrit needs to let you in: your e-mail address.

Why can’t I stay anonymous on Gerrit?

Even though you can create a GitHub account without exposing any private information or your e-mail address, you cannot sign up with Gerrit Code Review if you are not willing to share who you are.
The e-mail address is a crucial property in Gerrit, and without it parts of the code-review workflow would stop working. This requirement is so pervasive in the architecture because of the needs of the Android Open Source Project (AOSP), the project historically hosted on the platform. See the details of the Android contribution license and process at https://source.android.com/setup/start/licenses.

What if I still want to stay completely anonymous on Gerrit Code Review?

Gerrit has the concept of the “Anonymous Coward” (not an offense, just the default name assigned to it), which is an account that has only an ID, without either a full name or an e-mail address. With the anonymous user, however, you cannot review code, for obvious legal reasons and to prevent spam and abuse.

In doubt about what to choose, or where to start?

If you are concerned about sharing your personal information and need to decide which level to use, just choose “Reviewer”: you can amend your choice and extend your scope at any later time.

GerritHub.io and GDPR


GerritHub.io has changed some of its key components to be compliant with the new European General Data Protection Regulation (GDPR). You can find the updated Privacy Policy on the GerritForge Web Site.

Impact on Gerrit Code Review

Gerrit Code Review had a fundamental problem with some of the concepts in the GDPR: once created, accounts could not be removed from the system. Yes, you could disable them, but their associated personal data would remain in the Gerrit DB or in the All-Users Git repository.

We have developed a brand-new plugin for Gerrit that enables the following features:

  1. General permission to allow the removal of accounts
    Groups of users can be granted permission to remove their own account data. Gerrit administrators can be delegated to remove other accounts.
  2. New SSH command for account removal
    Users that have been granted the permission can access a new ‘account’ command to remove their own personal information (see the sketch after this list).
  3. A simple and easy self-service online form for users to review their personal information and remove their own account.
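As an illustration, a user granted the removal permission could invoke the SSH command along these lines; the exact command name and options are hypothetical and depend on the version of the plugin installed:

# Hypothetical invocation of the account-removal SSH command
# (command name and options may differ in the actual plugin)
ssh -p 29418 myuser@review.gerrithub.io account delete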

The new plugin has been shared with the wider Gerrit Code Review community and will soon be publicly available to any other website that hosts Gerrit and is publicly accessible to EU citizens.

Access and control of your Personal Information

  • Right to Access
    Every personal profile is directly accessible to its owner through the User’s settings in Gerrit Code Review.
  • Data Portability
    Gerrit stores the user’s profile in a specific set of branches of the All-Users repository. GerritHub.io can make the personal branches available to anyone who would like direct access to their data, to reuse it or import it into their own system.
  • Privacy by Design
    Gerrit already stores only hashed passwords. Additionally, the GitHub credentials are never passed to Gerrit Code Review and the authentication and profile access delegation process is completely controlled by the user thanks to the OAuth 2.0 and Scope selection process.
  • User Data and “right to be forgotten”
    Thanks to the new Gerrit ‘account’ plugin, every user on GerritHub.io is always in control of their data and can decide to remove their account permanently at any time.
    Note that Gerrit will have to “remember” the account UUID for consistency of all the previous review work done by the user. Every account removed from the platform becomes an “Anonymous UUID” and can never be reused in the future.

Questions?

If you still have doubts or questions about Gerrit Code Review and the EU GDPR regulations, you can get in touch with GerritForge Ltd.

GerritHub adopts 100% PolyGerrit

[Image: PolyGerrit logo]

GerritHub.io has been successfully migrated to Gerrit Code Review v2.15. Thanks to the 5-day Gerrit Hackathon hosted by Axis in Lund, all the remaining issues we had on v2.15 have been resolved, and from today all 15k active users of GerritHub can use the 100% feature-complete PolyGerrit UX.

What is PolyGerrit?

PolyGerrit is the code-name of the new UX wholly redesigned using web components, a set of web platform APIs that allow you to create new custom, reusable, encapsulated HTML tags to use in web pages and web apps. Custom components and widgets build on the Web Component standards, will work across modern browsers, and can be used with any JavaScript library or framework that works with HTML.

To access PolyGerrit, just go to the GerritHub.io footer and click on the “Switch to New UI” link, or add “?polygerrit=1” as an extra query string (e.g. https://review.gerrithub.io/?polygerrit=1).

To have a more comprehensive description of PolyGerrit, you can read the previous blog post about the Google talk at the past Gerrit User Summit in London.

Zero-downtime “ping-pong” migration to Gerrit v2.15

As usual, we migrated with near-zero downtime, with only a ten-minute *read-only* window while we waited for the final replication events to drain between Data-Centers.

There are two Data-Centers (DC) active for GerritHub.io; the main one is hosted in Canada and the second in Germany. Both DCs have a high-availability configuration and run the same version of Gerrit with the same data. However, during major upgrades, we use a “ping-pong” technique to give a seamless experience to the users.

Ping phase

DC-Canada keeps running Gerrit v2.14 while DC-Germany gets upgraded to v2.15. Traffic is then smoothly forwarded from DC-Canada to DC-Germany by the HAProxy that serves https://review.gerrithub.io.

Pong phase

DC-Canada gets upgraded to v2.15 while DC-Germany syncs back to DC-Canada in near-real time. Once the upgrade is complete, the HAProxy forwards the traffic back to DC-Canada.
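As a sketch of how such a switch can be driven, HAProxy’s runtime API allows draining one Data-Center and enabling the other without restarting the proxy; the backend and server names below are made up for the example:

# Drain traffic from DC-Canada and send it to DC-Germany
# (backend/server names are hypothetical; requires the HAProxy admin socket)
echo "disable server gerrit_backend/dc-canada" | socat stdio /var/run/haproxy.sock
echo "enable server gerrit_backend/dc-germany" | socat stdio /var/run/haproxy.sock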

Benefits of the ping-pong upgrade

There are two significant benefits of combining the zero-downtime rollout with the ping-pong technique:

  1. No general service disruption, minimal read-only time.
    Nobody would notice any significant service disruption on GerritHub.io: the read-only window is a minor service degradation that lasts for only a few minutes. Given that 90% of the traffic is represented by “git fetch/clone” and web browsing, the degradation is hardly noticed by anyone.
  2. Validation of the disaster recovery procedure.
    Because DC-Germany is used as the disaster-recovery site, it is essential to make sure that it is always working and that you can actually fail over to it at any time.
    You do not want to find out that your disaster recovery isn’t working when it is too late.

What’s new in Gerrit v2.15?

There are many changes in Gerrit v2.15; for the details, you can have a look at the Google presentation at the Gerrit User Summit 2017 in London.

In a nutshell, here are the headlines of the most visible changes you will notice:

  1. Support for draft changes and draft patch sets has been completely removed.
    There are now two possible states for a change: WIP (work-in-progress) and private. All the changes that were in “draft” status at the time of the migration have been moved to the WIP state.
  2. New URL Scheme
    Gerrit URLs generated and used by the UI include not just the change number but the project name as well.
    For instance, the Change 123 on project ‘myproject’ would now be accessible on the URL: https://review.gerrithub.io/#/c/myproject/+/123.
    Existing URLs containing only the change number (e.g. https://review.gerrithub.io/123) are redirected to the new scheme.
  3. New workflows on the PolyGerrit UX
    The PolyGerrit UX is now 100% feature complete. It is not only an engineering rewrite but also a whole redesign of the user-flows and experience.
    See at https://www.gerritcodereview.com/releases/2.15.md#new-workflows the details of all the new flows.

What’s next?

Gerrit v2.15 includes a brand-new storage for reviews, code-named “NoteDb”, which is actually your Git repository itself. That means that all the metadata, comments, scores, history and audit trail will be stored in the same Git repository as your code.

Our next step is to perform an online migration from ReviewDb to NoteDb, again with zero downtime. Thanks to the fantastic work by Dave Borowitz (Google), there will be no need for a read-only window: you will not even notice.

Want to know all the details of Gerrit v2.15?

For an overview of what’s new in GerritHub with v2.15, you can look at the Gerrit User Summit 2017 presentation.


GerritHub.io donations to Shawn’s family


Free and OpenSource, always

We never had a “Donate” button on GerritHub.io; we never needed one, because Gerrit Code Review has always been OpenSource and free, and it always will be.

We have all benefited from the dedication and passion of a team of 600+ contributors who spend their time every day innovating and improving the code. Thanks to all these people, we have a world-class code review tool that over 25,000 developers use every day.

For over five years, GerritHub.io has been online with NO ads and NO premium plans. Many important companies, such as Intel and RedHat, and many OpenSource projects use GerritHub.io every single day for their cooperation and daily reviews. See below some of the most active ones:

Time to say “Thank you Shawn”

Shawn Pearce, the founder of the Gerrit Code Review project, died two days ago, leaving behind a loving wife and his children. In many ways, Shawn’s daily work lives on in the software that keeps GerritHub.io alive every day.

It is about time to pay tribute to Shawn’s dedication and to make his dream come true: giving support and help to his family in this difficult moment.

From today, you will see a “MAKE A DONATION” button on every page of GerritHub.io that points to the “Shawn Pearce Memorial Fund”, a fund dedicated to raising money for his family.

Spreading the word in the OpenSource communities

The news of Shawn Pearce’s death has gone through just a few mailing lists (Git, JGit and Repo-Discuss), and I am sure that many of the people using Gerrit Code Review may not be aware of it.

Share this post, start using GerritHub.io on your OpenSource project and MAKE A DONATION to Shawn’s family.

Giving Shawn’s family and his children a future is the most concrete way to say “Thank you for having created Gerrit Code Review”.


Shawn Pearce: a true leader

Shawn Pearce Memorial Fund

Shawn Pearce was the founder of the JGit and Gerrit Code Review OpenSource projects. I am truly honored to have worked with him on the same project, even if we were sitting on opposite sides of the Atlantic Ocean and working in offices with different company names.

Many times people talk about real leaders, but having known someone like Shawn for over eight years makes a big difference. We both started with the same ideas back in 2008, even before meeting each other. Shawn had the goal of making Git scalable for its adoption in the Android OpenSource project and started by rewriting the Rietveld project in pure Java, while I had the purpose of extending the use of Git version control to large enterprises around the world, with the GitEnterprise.com service.

Embracing ideas

We first met face-to-face in 2011 at the GitTogether in Mountain View, CA, at the Googleplex. Shawn was the initiator of this famous “unconference”: an incubator of ideas and creative minds spending time together to exchange and enrich each other’s experience with new revolutionary concepts.

I was new, and I did not know anyone from the project. My ideas were very different from whatever the people in the room were discussing. Then I stood up and proposed to change the architecture of the project and make it fully pluggable, inspired by the experience and success of Jenkins CI.

Shawn did not have a clue who I was, where I was coming from, or what my experience or working history was: he just answered, “yeah, that makes sense. Would you like to come and help us?”. And so I did; we worked together for a week, and that is where my journey with Shawn started.

Inspiring people

What I liked about Shawn was his unique style of leading the project. He never announced or presented himself with glamour at the conferences, but everyone knew who he was, and when he entered the room, he always captured the interest and attention of everyone.

His passion for talking about the challenges he was facing with the project, always pushing the boundaries of what could be achieved, was contagious. He never told people what to do, and still left everyone to make their own mistakes, learn and improve.

Shawn was the seed that made the project grow far beyond its initial size and scope. He was able to take promising young people like Dave Borowitz, who had started just a year earlier as an intern at Google, and let him rewrite the entire backend to implement NoteDb.

Humanity first


Shawn was bright, a genius even, but what inspired me most of all was his passion for his children and his lovely family. He used to have his son come over to our Hackathons at the weekend as well, to be close to his dad: he merged his family life with his daily work and passion. The best moment I remember was when we had a “hacking barbecue” at his new house in San Jose, facing the beautiful hills at sunset, with Noah playing in the garden and hugging his loving dad.

He always looked at combining the needs of his family with the presence and duties of the Gerrit project, which reminded everyone that people are the most critical part of it.

Dedication

[Chart: Shawn Pearce’s commit analytics]

The loss of Shawn was unexpected for a lot of people: he kept on posting code and feedback on the mailing lists until his very last days. He implemented brand-new support for OpenSSH in Gerrit back in November 2017 and kept in touch with the community and contributors until Christmas. His last commit was on the 3rd of January 2018; he remained committed until the very end of his life.

commit ea974d725ed1d5d5ffa1461bef7986168819dee3
Merge: e650e06 bd42cc7
Author: Shawn Pearce <sop@google.com>

Date:   Wed Jan 3 01:06:03 2018 +0000
    Merge "Prolog cookbook: attribute success to uploader, not author"

He never publicly disclosed his illness or how severe it was. He lived his life with the best of his ideas and passions till the very end, with his family, his project and all the things he loved the most.

Shawn’s Legacy

When great leaders like Shawn leave, there is always the danger of a void. I believe it is not possible to replace Shawn, because he was a genuinely unique leader in his style and passion. What he has created, however, is a vibrant and healthy community that is out there and will continue with the values he taught us through his inspiring professional and personal life.

Shawn defined what he created as a “healthy live project”, one able to continue beyond his life and our existence. See below the ten years of the project in numbers.

[Chart: ten years of the Gerrit Code Review project in numbers]

Gratitude

We are all grateful for what Shawn has done for us and for all the future contributors and adopters of Git, JGit and Gerrit Code Review.

I am a lucky person to have known and worked with such an inspiring leader. Thanks, Shawn.


Gerrit User Summit: Multimaster at Qualcomm

My name is Martin Fick, I work at Qualcomm as a Gerrit Administrator, and I have been one of the key Gerrit maintainers since the very beginning of the project.

I am going to talk about my experience in introducing a truly multi-master Gerrit setup at Qualcomm, and share the findings and the experience we gathered to make that happen.

Let’s start with a question so that we can make this session very interactive from the start.

Q: Who would like to have multi-master in Gerrit Code Review? Why would you need it on your server? Failover? High-availability? Both?

A: All of that, but also latency. We have developers all around the world and at least two datacentres, one in the States and another one in Europe; we were looking for a multi-site multi-master setup to decrease the latency between the sites.

A: (Luca Milanesio – GerritForge). Scalability and elasticity, allowing the instances to grow and shrink based on the incoming traffic. Additionally, we would like to have zero downtime and rolling upgrades, without stopping the service when moving between minor versions of Gerrit.

Q: (Han-Wen Nienhuys – Google) Let me reverse the question: who is running multi-master and would like to have a single master?

A: (Dave Borowitz – Google) I would love it if we had a single server that was powerful enough to serve all our traffic with no latency, so we did not have to deal with slow database backends and replication lags between sites.

Q: Does anybody need multi-master? How are you surviving today if you need it?

A: Right now we maintain a second master server for DR purposes; it is a passive standby backup server. If our master went down, we would have to fail over to it.

A: We will need multi-master soon. Currently, we are experimenting with the HA plugin as a fallback solution, but we are working towards multi-master so that we can scale up, which is our primary concern here.

Gerrit Multi-Master requirements

I hear different needs from the audience: high availability, scalability. They are not necessarily the same requirements, and I believe multi-master would address some of them. What’s important is that everyone focuses on their own needs and thinks about how to approach those first, instead of going for the “big everything” solution.

We have high availability as an issue somewhere, so we want better availability, and we have some scalability issues as well. We do have site issues at times, but we generally deal with those through slaves all over the world, between 50 and 100 of them, and we primarily have one data-center, in San Diego, USA. If that goes down, nobody gets anything done even if Gerrit is up, which makes failing over to other sites not particularly useful right now.
Our current focus is on better availability and better scalability at one site.

Scalability problems at Qualcomm

We have a pretty hefty load and around 6k projects, but the load is concentrated on a few large ones. They are not massive projects, but they are significant in that they have many changes and a lot of users. A few kernel projects take most of the traffic: even though there are many other little projects, most of the users are pushing to those few projects.


We are a single-tenant server, so that is primarily what we deal with. Other people need other Gerrit servers and manage their own, and at the moment we don’t have to deal with those. However, if we had a more scalable system, I believe multi-tenancy would become the next problem on the horizon. If I had a great scalable system, why would I create these instances here and there? I would rather put all of them on one central system, so that I manage a single system. But then I would have the problem that Gerrit doesn’t currently handle multi-tenancy well. Maybe it is because we don’t have a multi-master system, so we are not ready to put everything on one server anyway. Once we have a multi-master solution, multi-tenancy becomes something much more interesting.

On one server we have around 6k projects, the main project has about half a million refs, and we have in total 2.3M changes on the entire server.

I like to break the scaling topic down into two different ideas: horizontal scaling and vertical scaling. Horizontal is where you have lots of projects and lots of users; vertical is where they are concentrated on a few projects. In our case, we need to scale vertically more.

Stairway to Gerrit Multi-Master

Here is what we decided to do, and we decided to build it incrementally. These are the four phases in which we approached it.

[Slide: the four phases of the multi-master rollout]

Step #1: Active/Passive

We have an active/passive standby, and we started doing regular failovers. Let’s face it: if you have a failover machine that you have never actually failed over to, it probably would not work.
We have so much software around the Gerrit ecosystem: a lot of scripts, cronjobs and other things that run on our server, with many dependencies, Python hooks and the like.
All that software gets updated. If the failover machine is not updated through the same process and tested, there could be missing dependencies, wrong package versions, hacks or symbolic links on the filesystem pointing to one place and not the other, different filesystem layouts, and things like that.

Before even dreaming about multi-master, you need a system to keep your servers consistent. If you don’t think your organization can make two systems precisely the same, you are not going to be anywhere near ready to start doing multi-master. If you are a large organization with a lot of history, it is a bigger challenge. If you are starting from scratch, it is a lot easier: think about it from the start, focus on repeatability and on deploying the same stack of software to multiple servers, and then concentrate on failing over.

Sometime this year, around February/March, we started failing over, and now we try to fail over weekly. We have been doing that to get the team used to it and to ensure that our processes work.

Step #2: Active/Hot Standby

Then we moved to a hot standby. The difference is that before, the passive node had all the software in place but was not necessarily running; now we actually started the other server, only nobody was pointing to it. The hot standby gives you the opportunity to test the software you have around your Gerrit master. If you have hooks or cronjobs, try to run them on the failover server, even if it is not the active master. Do you have the coordination needed? Do you have locks in place? If you are doing repository repacking, can you do it on both servers at the same time? Do they conflict with each other?

The assumption is that you have shared repositories on a backend such as NFS, and thus activities like repacking need that coordination. You need that stuff to work before you can go to multi-master.

Step #3: Active/Active with Round-Robin DNS (low tech)

The third step, which is where we are at, is active/active. We chose a simple, low-tech and easy-to-back-out solution: round-robin DNS. It sucks from a load-balancing standpoint and for failover recovery, but it gets traffic going to two servers. We have been running that for quite some time now: 15 hours!

Step #4: Active/Active with Load Balancer(s)

And the next step is, of course, to have a load balancer. We already have a load-balancer setup that we use for slaves. It is much harder to back out of, because you have to set up IPs and have DNS pointing to them, but we plan to transition to it.

What Gerrit data to share in a multi-master setup?

In a single-site Gerrit multi-master setup, you necessarily need shared Git repositories; you do not share the H2 caches, because each node needs its own; and the ReviewDb needs to be shared, until it goes away with Gerrit Ver. 2.15.
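As a minimal sketch of that setup (paths and hostnames are examples, not actual production values), each master would point at the same repository path and the same external database in its gerrit.config:

# On each master: share the Git repositories (e.g. on an NFS mount)
git config -f $GERRIT_SITE/etc/gerrit.config gerrit.basePath /shared/nfs/git
# Point both nodes at the same external ReviewDb instead of the local H2
git config -f $GERRIT_SITE/etc/gerrit.config database.type postgresql
git config -f $GERRIT_SITE/etc/gerrit.config database.hostname db.example.com
git config -f $GERRIT_SITE/etc/gerrit.config database.database reviewdb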

[Slide: what Gerrit data to share in a multi-master setup]

Sharing sessions

There is other stuff you need to share, like web sessions.
When you log in with your web browser and you hit one machine, the browser keeps track that you are logged in with a cookie, while the server keeps track of it in a persisted session. If your next hit goes to the other server, you don’t want the user to be kicked out, so you need to share the server-side session data across nodes.

In 2014 we developed the websession-flatfile plugin, which takes care of sharing web sessions; I know a lot of people have already used it for HA solutions. It is straightforward: it just stores the sessions on the filesystem, so if you are keeping your Git repositories on NFS, put the sessions there too and share them.
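A sketch of the relevant configuration, assuming the plugin’s directory option and an NFS path shared by both nodes:

# Store web sessions on the shared filesystem (path is an example)
git config -f $GERRIT_SITE/etc/gerrit.config plugin.websession-flatfile.directory /shared/nfs/websessions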

Sharing events

The other thing that you need to share is the events data. Our events are persisted on the server side, and we share them on the filesystem too, which allows sharing them across masters. If you connect to one of our masters through SSH, you get the events from both masters. It’s a poor man’s sharing: a simple set of files that sits there and is shared. One thing we realized using multi-master is that your events may well be disrupted more often. Even if you think you have high availability, a client can only connect to one server; it doesn’t matter if the cluster has ten servers, the client is still attached to just one of them.

Generally speaking, most of us want to do rolling upgrades, which bring one server down at a time, and that is one of the reasons we want to go to multi-master. If you are doing that frequently, you are disrupting your SSH clients, and even more so the clients that stay permanently connected, such as stream-events consumers.

I have contributed a new stream-events plugin and merged it upstream during the last hackathon, so you can download it if you want to look at it; it helps you persist events. You get the added benefit of flags to put IDs on your events and to replay from an ID: since the events are stored on the filesystem, when you connect you can tell what the last ID you received was, and you start getting all the events after that. It works through the same interface as the old stream events: you drop it in, and it works. It respects Gerrit permissions, just like the core command does, so a connecting user will only see the events of the projects and branches he has access to.

Coordinating other processes on the Gerrit nodes

Besides the Gerrit JVM, our servers run a lot of hooks as separate processes, some of them for a very long time, and you need those to run on both servers. We have a few cronjobs that do repacking, others that check on LDAP whether users are active or inactive, update their status in Gerrit, and verify that their e-mail addresses are really what they claim; and finally we have various maintenance cronjobs.

We focused on the background stuff first, because you need to get it done anyway and you can tackle it piecemeal, so we tried to coordinate those tasks first.

We went for the low-tech solution, sharing as much as possible via the filesystem.
Years ago I created a lock implementation based on the filesystem. It is similar to echoing a PID to a file, but it is a lot more robust: it can recover deadlocks, as long as it can contact the other server over SSH, and it does all of that automatically.
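A minimal sketch of the idea (the real implementation is more robust than this): claim a lock file atomically with the owner’s host and PID, so that another node can SSH to the owner and recover the lock if the process has died:

# Minimal sketch of a cross-node filesystem lock (not the actual implementation)
LOCK=/shared/nfs/locks/repack.lock
if (set -o noclobber; echo "$(hostname):$$" > "$LOCK") 2>/dev/null; then
  echo "lock acquired, doing the work..."
  rm -f "$LOCK"   # release when done
else
  OWNER_HOST=$(cut -d: -f1 "$LOCK"); OWNER_PID=$(cut -d: -f2 "$LOCK")
  # Recover a stale lock by asking the owning node whether the process is alive
  ssh "$OWNER_HOST" kill -0 "$OWNER_PID" 2>/dev/null || rm -f "$LOCK"
fi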

We use that as the primary method to coordinate things, and on top of it I have built a scripting queue executor for hooks. Years ago, before multi-master, we had issues with hooks that ran for too long. If you remember how Gerrit manages hooks, they run from the Java process through a queue that executes only one at a time. If your hooks take, for instance, three minutes each, you may build up a backlog on your server; if your node goes down, you lose that backlog. What we decided years ago is to have the Java process just write the hook invocation to a file, and a scheduler then runs the hooks in the background. The queue in Gerrit stays at zero: it takes less than a second to write the file, and then it is done.

We made that multi-master friendly and, because the queue uses the locks, it just worked when we ran it on more than one master. One server can write the hooks to a file, and the other can be the one that runs them, which distributes the load.

Gerrit cluster events

If you think about it, there are a lot of things hooked into startup, so I put in a little framework that checks whether the other nodes are running by ssh-ing into them. That allows identifying that, if both servers are down and one comes up, not only is your node starting, your cluster is starting too; if the other node is already up, you are starting the node but not the cluster. Likewise, if one node goes down you have stopped only the node, not the cluster, and if the other goes down as well you have now stopped the cluster.

I have created a little hooks framework so that you can plug into those events and act on them. For example, at the startup of any node we start the hooks, just to make sure that they are running; but when you stop a node, we disable them only when the whole cluster goes down. If just one node goes down, the hooks need to keep running, because they can still contact the other server: they go to the domain name, not to the local server.

We plan to use the hooks framework for replication too. Currently, if you are familiar with the replication plugin, at startup it tries to replicate all the projects to all the slaves to check that they are up to date. The theory behind it is that data could have been modified behind Gerrit’s back, possibly because the server was shut down in the middle of doing something and never finished it. In a cluster situation, if a node goes down at any time, it could have been in the middle of replicating, and the other node won’t know. So there is no need to do replication at startup when a single node starts, but only when the whole cluster starts.
If there is at least one node up when the other comes up, you don’t want to do global replication. You could replicate, but it would just be extra load.

Caching across the cluster nodes

You can do some of the coordination better than with the simple filesystem. For instance, the high-availability plugin uses one-to-one connections: you configure the IP of one master to connect to the other, and they talk to each other. That works for two nodes, but it is not a super-scalable approach; it is an N-squared problem. Eventually, you want to move to a pub/sub system for things like that.

Something I haven’t mentioned yet, coordination-wise, is caching. The high-availability plugin evicts caches, but we don’t do that yet; we consider it an optimization. Our primary remedy is just to reduce the time-to-live of the caches, mainly decreasing it to the values we had on our slaves. You probably already keep lower cache times on your slaves, to make sure you check your projects’ ACLs and your group memberships a little more often than on your master. A Gerrit master knows when things change without having to evict any cache, but the slaves do not. We made the masters a little dumber, like the slaves: they know when they changed something themselves, but not when another master did. As long as you are fine with the delay, say 5 minutes, it means that when you change an ACL it may take up to 5 minutes for the other master to see it.
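A sketch of that remedy using Gerrit’s standard cache settings (cache names vary between Gerrit versions, so check the documentation for your own):

# Let the ACL and group caches expire after 5 minutes on each master
git config -f $GERRIT_SITE/etc/gerrit.config cache.projects.maxAge 5m
git config -f $GERRIT_SITE/etc/gerrit.config cache.groups.maxAge 5m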

Demo: make your laptop a multi-master Gerrit server

Has anybody tried to run two Gerrit instances on the same laptop?
The first problem you will see is that if you point them to the same H2 caches, they will just not start. But if you run init on two different directories and don’t share any data between the two masters, you then need to go and modify your gerrit.config files to start sharing something; see the sketch below.
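A sketch of that two-site setup, assuming a downloaded gerrit.war and the ports used in this demo:

# Initialize two separate sites on the same laptop
java -jar gerrit.war init --batch -d ~/gerrit-master1
java -jar gerrit.war init --batch -d ~/gerrit-master2
# Move the second master to different ports
git config -f ~/gerrit-master2/etc/gerrit.config httpd.listenUrl "http://*:8081/"
git config -f ~/gerrit-master2/etc/gerrit.config sshd.listenAddress "*:29419"
~/gerrit-master1/bin/gerrit.sh start
~/gerrit-master2/bin/gerrit.sh start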

Out of the box sharing abilities

You need to reconfigure the database to be the same, and you should use PostgreSQL or MySQL, because the default H2 won’t allow any sharing. I am running 2.14 here and I did not share the indexes, so they are going to be out of date in this demo. The users’ sessions are not shared out of the box either, so you need to configure that manually:
if you use the websession-flatfile plugin, it will put the sessions on the filesystem, and then you can share them there.
Lastly, the stream events won’t be shared, but most of the rest will just work.

I am about to run stream events on both masters: on the left screen I am connecting to port 29418, and on the right screen to port 29419.
Now, if I go to the Web UI, on port 8080 I have the same master as the one on the left, and on port 8081 the one on the right.

I am going to create a project using the master on the left: I hit create, and you can see that the events appear in the stream events on 29418 but not on 29419.

Now let’s try to go to port 8081, and I have to log in again because the web sessions are not shared yet: I am showing you what it does out of the box. This time the events came up on the right screen but not on the left one.

The two masters are just pointing to the same Git data: I am trying to make them multi-master, but so far it is not working very well.

Adding plugins to share events and sessions

So, let’s step in and start deploying some new plugins. What we did is create a new plugin called “events”, so that when you want the distributed events, you just call “events stream”, where “events” is the name of the plugin and “stream” is the command.

What we did in the core is remap the actual “gerrit stream-events” to “gerrit stream-events-core”, so that you can still get to it if you want, which helps for debugging. Then we pointed “gerrit stream-events” to “events stream”, so that users automatically get the new plugin’s events; to them the change is invisible.

I am now on 8081 in the browser over here, I create a new project called ‘Luca’, and here we go: events are generated on both the 29418 and 29419 ports. You may have noticed that the event showed up on the right first and then on the left.


The events are on the filesystem, so the server just has to poll; here I am polling every second. If something happens, the server will automatically catch up on all the events generated on the other server as well.

The polling is a backup: the events should come up even without a polling mechanism. Even if we eventually had a pub/sub mechanism, I would still suggest keeping the polling. Events are inherently unreliable and will get lost: you need a polling mechanism in place if you want true reliability. Polling is not great, but it is reliable. Events are effectively an optimisation for speed.

Let’s go back now to the other server on port 8080: I did not have to log in, because the sessions are now shared and kept across masters. I create a ‘hugo’ project, and it comes up pretty quickly in the stream events on both masters.

So we now have shared web sessions and shared events. That is on 2.14; to make it a true multi-master you would need to share your indexes as well, but as we are still on Gerrit v2.7 at Qualcomm, we don’t have an index and don’t need to share one.

More exciting features in the events plugin

We’ve been using the events plugin in a single-master scenario since early this year, and it has been pretty reliable. I can show you some other exciting features that are in there. There are some new options here: IDs, and resume-after. If I give --ids and --resume-after 0, it returns all the events since I initialized the instance. Even if the server started and stopped a bunch of times, I still have all those events recorded. The ID of an event has two parts: the left part is a UUID identifying the event store, and the right part is a sequence number. The idea is that if someone deleted the store on disk, a new one with a new UUID would be created. Suppose you had reached event 1M, someone removed the file store, and you asked for everything after event 1M: the new event store would restart at zero, and you would get nothing. With the two-part IDs, when you ask “give me all the events after ID 1M”, the UUID has changed, and the plugin realizes that you need everything. That gives a little extra safety. The plugin works on Gerrit v2.14, and I made some changes to make it work on v2.15 as well.
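In other words, replaying the whole event history and then continuing with live events looks like this (host and ports as in the demo above):

# Replay all recorded events, with their IDs, then keep streaming new ones
ssh -p 29418 admin@localhost events stream --ids --resume-after 0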


Q: If you use Hugo’s high-availability plugin for the index, you could probably do multi-master right now. Why are you guys not running multi-master then? Are you concerned about NFS and ref-updates?

A: (Hugo Ares – Gerrit Maintainer) We use it in failover mode only, for the reason that if you just have a little bit of load, you are safe and consistency is kept. But if you push it a bit more, sharing Git repositories in write mode from different machines doesn’t work. If you have two machines writing to precisely the same Git repository, the same branch and the same file at the same time, you are going to lose history; we know, because it happened to us. That’s why we don’t do multi-master right now.

We have been writing to the same repositories from different machines over NFS for several years, and we haven’t found any issues, and we do repacking all over. Perhaps we have different NFS settings or just a different implementation.

A: It did not happen that often, but at least a couple of times, and that’s why we don’t do it. We mainly use the high-availability plugin for evicting the caches, so that we don’t need to lower the cache settings: entries are removed automatically. It also takes care of the events in a slightly different way, and it triggers the reindex on the other copies. We can keep nodes down during the day, and people do not notice any difference at all. We are just a little bit reluctant to write from both nodes, but that’s the plan.

So you are already on a “multi-master ready” setup there. We hear your reports of the JGit problems, and we will keep an eye on them. I am confident we can fix any issues that show up, understanding how Git works on NFS. There are some tricks we can pull from our code base.

The (in)famous NFS stale file handle bug

I know we fixed a bug in JGit in the past to handle NFS: when you delete a file and someone else tries to access it from a different node, they get a stale file handle error. That is the main problem you run into; we encountered and fixed some of those issues in JGit, and there are possibly a few more left here and there. The workaround to the stale file handle problem is just to make a copy, or a hard link, keep it around for a bit longer and remove it later; then you are fine. Generally speaking, it is just the brief moment after the operation that is a problem. It is what would happen on a local filesystem anyway: if you delete a file that is still open in other processes, the inode stays around until it is unreferenced. That is the main issue we have been running into, and most of the other problems have already been fixed in JGit.

Distributed GC on a multi-master setup

We have a distributed Git GC and repacking system relying on the SSH-based filesystem lock mechanism I talked about before. We keep a list of repos to be GCed on the filesystem; one node says “I am going to do this one,” another says “good, then I am going to do this other one,” and in this way the GC load gets fairly well distributed, and it takes only a few hours to get through the whole thing. The lock prevents stepping on each other’s toes when repacking. As far as backup is concerned, we do an rsync and take database dumps, so that we don’t get inconsistent snapshots: you need to take your database backup first and the repositories next. If you restore from a database backup that is older than the repositories, you are generally safe, because you don’t risk having references to nonexistent objects.
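A sketch of that backup ordering, assuming a PostgreSQL ReviewDb and repositories under /shared/git (names and paths are examples):

# 1. Dump the database first...
pg_dump reviewdb > /backup/reviewdb-$(date +%F).sql
# 2. ...then copy the Git repositories, so any object referenced by the
#    (older) DB dump is guaranteed to exist in the (newer) repository copy
rsync -a /shared/git/ /backup/git/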

Questions

Q: I may have missed one point: the shared filesystem approach assumes you are not doing multi-site, right? If you were doing multi-site, it would introduce more problems and add more latency, which was the very problem you were trying to solve.

A: Yes, that’s correct. We are not interested in multi-site, because there are a lot of downsides to it: it is a lot more complicated, and it makes your writes a lot slower while the reads get faster. When you push to gerrit-review.googlesource.com, you realize that it is not the best experience: a git push needs the agreement of a quorum of replicas around the world before the operation can be finalized.
Single-site is the initial step for us; multi-site requires a very different approach. You need the Git objects to be replicated, and then somewhere to store the refs that have to be shared, and without a system like Google’s database, which assures that the refs are replicated around the world, that is hard.
It is not impossible; we made some experiments with it. A few years ago Dave Borowitz uploaded a ZooKeeper implementation that does the refs sharing, but only the refs. Then you need to do something for the Git objects replication. You could use regular Git replication for that, but you have to come up with a somewhat magical ref scheme that does the job behind the scenes.

A: (Luca Milanesio – GerritForge) Last year, at the Gerrit Hackathon in Mountain View, we presented a new implementation of the “missing bit” you were talking about. It is an OpenSource project based on JGit that leverages Apache Cassandra for the Git objects, while it uses the ZooKeeper implementation you mentioned for the refs. We are making a lot of progress on the project, and it may soon become THE solution for Gerrit multi-site.

Q: (Luca Milanesio – GerritForge) We recently found and fixed a problem with the cache consistency in JGit: it was reported on NFS, but it had nothing to do with it. When JGit was unable to open a file for whatever reason (NFS latency, maximum open files, whatever), the packfile was removed from the list of packs because it was considered missing. JGit then started failing to fetch objects from the repository, and people panicked, thinking that their Git repository was corrupted. But then, why did the other nodes accessing the same repository not have the same problem? And the problem “magically” disappeared when you restarted Gerrit. The problem has been fixed in recent versions of JGit, but it is still there in the version you are currently using as the basis of Gerrit v2.7. Did you find the same problem? How do you manage to keep up with the recent releases of JGit while staying on Gerrit v2.7?

A: We have an old version of JGit, but we have put our own patches on it. I believe we discovered and fixed a bug like that years ago. We ported a series of patches upstream, and we have a set of performance improvements on our branch. However, it is sometimes hard to get stuff merged in JGit; there are not enough maintainers to even look at the changes.

Gerrit User Summit: Script plugins with Docker

My name is Luca Milanesio, and I work for GerritForge. My talk today is about plugins and how to create them using scripting languages.

Gerrit plugins, where it all began

My contribution to the Gerrit Code Review project started in 2011 with the introduction of plugins. To understand where we are coming from, we need to go back to those times, when the project was just a year old. Gerrit was mighty from its very beginning, and the different companies that used and contributed to the tool had tailored the code base to their specific needs. When I joined the GitTogether conference in 2011, almost every user was talking about their fork of Gerrit. Forking is excellent, especially in OpenSource, because you can customise a project as much as you want, and we were all excited about the growing popularity of GitHub, where forking was a popular concept. However, keeping a fork up-to-date is not as easy as you may initially envision: moving on with the upstream releases is hard when you are working on a fork.

Back in 2011, when I was at the conference, I thought: “how can the Gerrit project evolve and grow if we are all working on forks?”. My way of convincing the Gerrit community to change that status quo was to invite Kohsuke Kawaguchi, the Jenkins CI project founder, to the summit. Jenkins CI is wholly based on plugins, while the core does not do much: it is the plugins that make the whole thing work as a CI.

That was enough to convince the community that a change was needed, and during the next Hackathon in 2012 I wrote the initial version of the Gerrit plugin loader; the first “Hello world” Gerrit plugin was born.

The introduction of scripting languages

After two years, Gerrit had only 50 plugins. If you looked at Jenkins at that time, they had over 600 plugins, more than ten times as many as Gerrit. Writing a new plugin for Gerrit was still too hard for most developers and administrators.

To develop a new Gerrit plugin you needed to know way too many things and have many skills: a different build system (Buck, and now Bazel), a full development environment, and all the required dependency packages.


We still got new plugins, because some people went through the initial pain of setting up the environment. However, for a project to thrive, you need to get people together and embrace a diversity of skills, to allow everyone to give the best of their knowledge.
Maybe the typical Gerrit admin is not a Java developer; he is possibly more familiar with Groovy, because a Ruby-like syntax is used in a lot of DevOps tools. Others are more familiar with Python, and if you accept what they can contribute, the project can benefit from the experience of many more people with different backgrounds.

What does the community think about it?

Once I shared my ideas with the community, the feedback was great. However, different people with different backgrounds started asking to use very different languages, ranging from Scala to Groovy and Python. I then realized that supporting a single scripting language would not have been good enough for most people.

“Hello world” in Groovy

To give you an idea of how easy it is to write a new plugin in Groovy, see the following example.

import com.google.gerrit.sshd.*
import com.google.gerrit.extensions.annotations.*

@Export("groovy")
class GroovyCommand extends SshCommand {
  public void run() { stdout.println "Hi from Groovy" } 
}

It is straightforward to write scripting plugins: put the above content in a hello-1.0.groovy file in Gerrit’s /plugins directory, and as soon as the file is saved the plugin is there and will be loaded into Gerrit within a few seconds.

The way Gerrit recognizes this file as a plugin is through its .groovy extension. The file name denotes both the plugin name and its version, delimited by the hyphen: in this example, the file hello-1.0.groovy identifies a plugin called ‘hello’ with version ‘1.0’.
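To try it out, drop the file into the plugins directory and call the exported SSH command a few seconds later; the site path is an example, and the command name follows the plugin-name plus @Export convention described above:

# Install the scripting plugin by simply copying the file in place...
cp hello-1.0.groovy $GERRIT_SITE/plugins/
# ...then invoke the @Export-ed SSH command
ssh -p 29418 admin@localhost hello groovy
# prints: Hi from Groovy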

One warning about Groovy: it is a language that relies on Java reflection for method invocation. Reflection is a capability of the Java runtime that enables method discovery, which is handy to use but slower than direct Java method calls.
The drawback of the Groovy language’s ease of use is extra CPU cycles at runtime.

The beauty of using a scripting language for plugins is the speedup of the development cycle: as soon as you edit the Groovy file on the filesystem, the old plugin is unloaded and the new one is loaded into Gerrit. The plugin development lifecycle becomes much faster compared to traditional Java application development.

Develop Scripting plugins using Docker


Gerrit is provided as a Docker image on DockerHub. The ‘gerritcodereview’ organization has an image called ‘gerrit’, with all the available versions denoted as tags, since Ver. 2.14. Earlier versions of the Gerrit Docker images are available in the ‘gerritforge’ DockerHub organization.

In the following example I am running Gerrit 2.14.4 on Docker, fetching the image directly from DockerHub:

docker run -ti -p 8080:8080 -p 29418:29418 gerritcodereview/gerrit:2.14.4

In the above example, Gerrit is exposed through HTTP on port 8080 and exposes its SSH interface at port 29418.

Docker is a system for running containers, which are applications “packaged” with everything they need, including the libraries of the underlying operating system. The only requirement on your physical host is the Docker engine, which nowadays exists for MacOS and Windows as well as Linux, where it was originally designed. Whatever operating system you are running on your laptop, Docker is there.

Docker can be handy for all the contributors who are not familiar with the Gerrit development environment. There is no need to know or install anything on the local box, other than running the Gerrit Docker container. When I run Gerrit this way, as in this example, it starts straight away, with zero installation or configuration steps.

Gerrit out-of-the-box experience

The second significant value of the Gerrit Docker container is that it includes an out-of-the-box configuration, a welcome screen, and the plugin manager. It already contains a set of components that, if you are not familiar with Gerrit, will help you a lot in understanding what Gerrit is and how to use it.

As you can see from this screen, Gerrit has started, and if you navigate to http://localhost:8080, it shows you an initial welcome screen.

[Screenshot: the Gerrit Docker welcome screen]

Historically, the very first screen once you had installed Gerrit was a blank screen. I remember, a few years ago, people coming to me saying that as new Gerrit users they were quite confused: they just did not know what to do with the initial blank screen. In Gerrit Docker, the initial screen is a “Welcome”, which is a beautiful thing to say to people you did not know who came to your house. Additionally, it provides some useful links and information on installing plugins, which is very important, because Gerrit without plugins is missing some fundamental parts of its functionality.

Playing with Gerrit Plugin Manager

By clicking the “Install plugins” button, you reach the Gerrit Plugin Manager screen. For all of those who are familiar with Jenkins, it provides precisely the same functionality as the Jenkins plugin manager. If you type ‘groovy’ in the search bar, you can easily find the Groovy scripting provider and install it with a simple click. That is the plugin that tells Gerrit that, from now on, every file in the /plugins directory with a .groovy extension is a plugin that needs to be parsed and loaded at runtime.


You can discover and install other plugins as well. For instance, typing ‘github’ would list the integration of Gerrit with GitHub authentication and pull requests, while typing ‘jira’ would return the association and workflow integration with Jira tickets.
The plugin manager is a fantastic discovery mechanism for understanding which integrations are available for Gerrit Code Review.

The plugin manager automatically discovers the versions of the plugins that are compatible with the Gerrit you are currently running and, when you click ‘Install’, it downloads them and installs them locally. When you are done, just click on the top-right link “Go To Gerrit” and you are straight into Gerrit UX.

Now we have a running Gerrit instance with all the plugins I need installed, including the support for Groovy plugins.

Writing plugins in Scala

If you want to leverage the Gerrit scripting plugins but need optimal performance at runtime, you can use a different scripting language, such as Scala.


The Scala language compiles into native Java bytecode; it does not use reflection for method calls and, for some operations, can be even faster than the Java language itself. See the same hello-world example, rewritten in Scala.

import com.google.gerrit.sshd._
import com.google.gerrit.extensions.annotations._

@Export("scala")
class ScalaCommand extends SshCommand {
  override def run = stdout println "Hi from Scala" 
}

When I showed this to the community, people got really excited and started writing tons of scripting plugins.

What scripting plugins do in Gerrit?

Admin tasks as SSH commands

Sometimes Gerrit admins need to automate specific tasks; however, coding an external script can be slower and more difficult to implement. Inside Gerrit there are already a lot of objects representing pre-processed, in-memory entities ready to be used. It makes sense to leverage all the information that is already in memory and write new SSH commands as scripting plugins to control admin tasks remotely.

Scripted REST API

At times you also need to tailor the existing Gerrit REST API to your needs. For instance, imagine that your company has specific policies for requesting new repositories: why not create a new ‘Create Project’ REST API tailored to your needs using the scripting plugins, and expose it through a company HTML form? You can do it without being an experienced Java or Gerrit contributor, using a simple Groovy script for the new REST API.
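For instance, such a Groovy endpoint could simply wrap Gerrit’s standard create-project REST call, sketched here with curl (project name and credentials are examples):

# The core REST API that a tailored ‘Create Project’ script could wrap
curl -X PUT --user admin:secret \
  -H "Content-Type: application/json" \
  -d '{"create_empty_commit": true}' \
  http://localhost:8080/a/projects/my-new-repo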

Low-footprint hooks events

A third option is fascinating because, before the introduction of Gerrit plugins, the only way to react to Gerrit events was through hooks or stream events. Hooks are a traditional Git mechanism and, in Gerrit, have a scalability problem: they are invoked for every project and every event that happens anywhere, and each invocation spawns a separate asynchronous process. Over time, the extra processes created can cause significant overhead on your super-busy Gerrit server.
When a hook script needs to read from the Git repository, it has to process the packfiles on the local filesystem from scratch, uncompressing and parsing them in memory over and over again, which can slow down your server significantly.
If you implement Gerrit event handling using plugins instead, the same processing can be ten or even a hundred times faster.


Gerrit User Summit: What’s new in 2.15

This week we are honored to publish the amazing talk on Gerrit Ver. 2.15 presented by Dave Borowitz, the “father” of the NoteDb review format. It is the very first time that a full roadmap of the migration from the traditional ReviewDb (a relational DBMS) to NoteDb (a pure Git notes-based review store) is drawn and explained in all its details.

First steps with Gerrit Ver. 2.15 using Docker

For all of those who want to experiment with what Dave presented at the Gerrit User Summit, there is a Gerrit Ver. 2.15 RC2 Docker image already published on DockerHub at https://hub.docker.com/r/gerritcodereview/gerrit.

See below the simple instructions on how to start Gerrit 2.15 locally and play with it.

Step 1 – Run Gerrit Docker image

$ docker run -h localhost -ti -p 8080:8080 -p 29418:29418 gerritcodereview/gerrit:2.15.rc2
...
[2017-11-14 23:12:08,016] [main] INFO  com.google.gerrit.pgm.Daemon : Gerrit Code Review 2.15-rc2 ready

Step 2 – Open Gerrit UX

Open your web browser at http://localhost:8080 and you will automatically get to the plugin manager selection. At the moment only the core plugins are available, so you can just click on the top-right “Go To Gerrit” link.

Screen Shot 2017-11-14 at 23.19.38.png

As soon as Gerrit 2.15 is officially available, you will be able to discover and install many more plugins on top of your installation.

Step 3 – Create a new Repository

Click on the new top “BROWSE” menu and select “CREATE NEW”.

Screen Shot 2017-11-14 at 23.20.48.png

Then insert the repository name (e.g. “gerrit-playground”), set “Create initial empty commit” to True and click on “CREATE”.

Screen Shot 2017-11-14 at 23.22.25
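If you prefer the command line, the same repository can be created with the create-project SSH command, assuming your SSH public key is registered for the admin user:

$ ssh -p 29418 admin@localhost gerrit create-project gerrit-playground --empty-commit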

Step 4 – Clone the repository

From the repository page, select the “HTTP” protocol and click on the “COPY” link next to the “Clone with commit-msg hook” section.

Screen Shot 2017-11-14 at 23.24.56.png

And paste the command on your terminal.

$ git clone http://admin@localhost:8080/a/gerrit-playground && (cd gerrit-playground && curl -Lo `git rev-parse --git-dir`/hooks/commit-msg http://admin@localhost:8080/tools/hooks/commit-msg; chmod +x `git rev-parse --git-dir`/hooks/commit-msg)
Cloning into 'gerrit-playground'...
remote: Counting objects: 2, done
remote: Finding sources: 100% (2/2)
remote: Total 2 (delta 0), reused 0 (delta 0)
Unpacking objects: 100% (2/2), done.
Checking connectivity... done.
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  4695  100  4695    0     0   448k      0 --:--:-- --:--:-- --:--:--  458k

Step 5 – Create a new commit for review

Enter the newly cloned Git repository “gerrit-playground” and add a README.md file.

$ cd gerrit-playground && echo "Hello Gerrit" > README.md && git add .

Then identify yourself as admin@example.com, which is the default Admin e-mail for the Gerrit 2.15 Docker image.

$ git config user.email admin@example.com

And finally, create a new commit and push it to Gerrit using “secret” as the password.

$ git commit -m "Say hello to Gerrit" && git push origin HEAD:refs/for/master
Password: secret
[master b4de540] Say hello to Gerrit
 1 file changed, 1 insertion(+)
 create mode 100644 README.md
Counting objects: 3, done.
Writing objects: 100% (3/3), 303 bytes | 0 bytes/s, done.
Total 3 (delta 0), reused 0 (delta 0)
remote: Processing changes: new: 1, done    
remote: 
remote: New Changes:
remote:    Say hello to Gerrit
remote: 
To http://admin@localhost:8080/a/gerrit-playground
 * [new branch]      HEAD -> refs/for/master

Step 6 – Review your Change in Gerrit

Open the Gerrit Change URL mentioned in the Git output (http://localhost:8080/#/c/gerrit-playground/+/1001) in your Web Browser and you can review the code using PolyGerrit UX.

Screen Shot 2017-11-14 at 23.37.43.png

Step 7 – Approve and Submit your Change

Click on the “CODE-REVIEW+2” button on the right toolbar to approve the change, then press “SUBMIT” to merge it into the master branch.

Screen Shot 2017-11-14 at 23.40.45.png
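The same approval can also be done from the command line with the review SSH command, assuming change number 1001 and patchset 1 from the previous steps:

$ ssh -p 29418 admin@localhost gerrit review --code-review +2 --submit 1001,1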

What’s new in Gerrit Ver. 2.15

Hi, I’m Dave Borowitz, I work for Google, and I’m going to talk about what is new in Gerrit 2.15. We have a six-month release cycle, and it has been around 158 days since Ver. 2.14.0 was released in April; just last night at 10 PM, I released 2.15-RC0.

Gerrit Ver. 2.15 in numbers

Gerrit Ver. 2.15 is aligned with other recent releases:

  • 2,102 commits
  • 53 contributors from all around the world and different companies
  • seven new contributors who have never committed before

I don’t know if any of the new contributors are in this room and if they are, thank you, if not I will thank them remotely.

PolyGerrit frontend

There is a bunch of new stuff going on in Gerrit Ver. 2.15. I will talk only briefly about the frontend, because we had an excellent talk from Logan, Arnab and Justin this morning about what has happened recently in PolyGerrit.
I want to give you my perspective as a developer who does mostly backend work. It’s fascinating to have a modern JavaScript environment in Gerrit to develop in, because it has historically been difficult for us to attract frontend developers to the project. There are just not as many GWT developers in the world as there are JavaScript developers, and I think that the Gerrit project has suffered from a lack of UI attention in the past because of that.

But now we have a whole team here at Google, and a lot of external contributors, who are happy to work on modern JavaScript, and PolyGerrit has made excellent strides in improving the user experience.

Screen Shot 2017-11-14 at 23.51.35.png

PolyGerrit is also very usable. Everyone who has worked in software development knows the attraction of starting from scratch and saying “how would I have done it differently?”: PolyGerrit follows that approach a little bit.

We understand that we have users that are used to a certain workflow and we have to preserve it. However, when you are writing new code, you have the opportunity to learn from your past mistakes and improve things and maybe make it easier to onboard new people.
There is a little preview of what PolyGerrit looks like in Gerrit Ver. 2.15; I had to take the screenshot last night to make sure that it would be entirely accurate.
You can see that there is a reasonable rate of improvement, even in the last 24h, and I hope that we will keep this pace of development.

Screen Shot 2017-11-14 at 23.52.44.png

Backend

From Ver. 2.15 we support NoteDb, and we suggest that you convert your existing ReviewDb to NoteDb.
What is the motivation for NoteDb? Until now, Gerrit administrators had to be DBMS administrators as well, which feels weird: you care about version control and software development workflows and, for some reason, you also have to be a DB admin?

The idea behind NoteDb is that we take the data historically stored in ReviewDb and move it directly into the Git repository.
When you create a new change with a patch-set, a topic and all its meta-data, including comments and reviewers, everything gets stored in the same Git repository alongside the committed change data. There are several advantages to storing data and meta-data in this way.
You don’t have to worry about two stores that are eventually going to get inconsistent. If you have to make a backup of your Gerrit server, you have to take a database dump and then a backup of all of your repositories. If you want a consistent backup, where everything stored in the database exists in the Git repository and vice-versa, you have to do it while the server is down: there is no other way, because you may get new records in the database while you are backing up the Git repositories, and that data would not be reflected.

With NoteDb, it’s all in the Git repository, and it is even better because we changed JGit to be able to write to multiple refs of a Git repository atomically. This means you can submit a change and also submit the status of its meta-data at the same time. When you upload a patch-set, we update three refs: either all of them succeed, or all of them fail. There is no longer a possible state where the change was merged into the branch but ReviewDb wasn’t updated. That enables us to have consistent backups.

NoteDb provides a very helpful audit log. We had a lot of data issues in the past where we could not understand how a change got into a particular state, because in ReviewDb you just update a field to ‘X’ and forget completely that the field previously held the value ‘Y’. In the Git model you append commits to a history graph, so NoteDb actually stores every operation that has ever happened, and that gives you an understanding of how a change ended up in its current state.
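You can inspect this audit log yourself with plain Git commands. For example, for change number 1001 created earlier, the review meta-data lives on a dedicated ref (the sharded refs/changes layout below is Gerrit’s standard convention):

$ git fetch origin refs/changes/01/1001/meta
$ git log FETCH_HEAD

Each commit on the meta ref records one operation on the change: votes, comments and status updates are appended, never rewritten.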

Screen Shot 2017-11-14 at 23.54.20

We are also thinking about extensibility for new features, and this is an optimistic view of the future: plugins will be able to add new data to NoteDb, whereas it was never possible for a plugin to add a new column to an existing ReviewDb table. I don’t think we have any plugins that currently leverage this capability, and we do not have an extension point for it yet, but the data layer supports it.

NoteDb also automatically enables new features, such as moving changes between Gerrit hosts without having to throw away the code review meta-data.

NoteDb roadmap

We never actually drew a complete NoteDb timeline before; it spreads across five long years.

Screen Shot 2017-11-14 at 23.55.18.png

  • 2013
    The very first commit on NoteDb was in December; that was a very long time ago.
  • 2014
    We had an intern, and he wrote all the stuff on inline comments in NoteDb.
  • 2015
    I wrote something that, in retrospect, turned out to be a good idea, but it was a considerable amount of work. It is called batch update, and it allows a coordinated transaction across two different data-stores with a consistent interface. This period is what I call “rewrite every single line of Gerrit”.
  • 2016
    We started migrating googlesource.com; ReviewDb still existed, and it was always the single source of truth in case it got out of sync with NoteDb. Later this year, a few months ago, we moved everything to NoteDb, and we don’t use the change table anymore. We now have several hundred servers using NoteDb, with several hundred changes generated on it. That is exciting: we have been running it in production for months, and that’s why we believe it can work for other people in production too.
  • 2017
    Last night we released Ver. 2.15, the first version of Gerrit where we say: “we officially support NoteDb, and we encourage you to migrate away from ReviewDb.”
  • 2018
    We are going to release Gerrit Ver. 3.0 and Ver. 3.1. The reason for the name ‘3.0’ is that we will not have ReviewDb anymore. There will still be the code and the ability to migrate from ReviewDb to NoteDb, but you will not be able to run Gerrit on ReviewDb. Ver. 3.1 is the one I am most excited about, because in that version we do not even have to support a migration tool. We will then be able to throw away all the ReviewDb code, and that will make me very happy.

How to migrate from ReviewDb to NoteDb

I mentioned that the migration of googlesource.com took us a very long time because we found a ton of issues with our data. We discovered all these things while we were running our migration, and we fixed them all.

We developed a system that scanned all of ReviewDb, performed an in-memory migration and then compared the result with the changes stored in NoteDb. One example of a bug was that some subjects were truncated in ReviewDb. The subject is supposed to come from the first line of the commit message. We were comparing the data in Git with the data in ReviewDb, and they did not match because of the truncation. If we had required all subjects in NoteDb to be identical to the ones in ReviewDb, the migration would never have passed. We could have patched all the existing data, but what we actually did was to consider a NoteDb subject valid if it starts with the ReviewDb subject. There were many more bugs of that flavor.

There were also bugs in the NoteDb code that we fixed; it was not all about bad data, and my code was far from bug-free. The reason I am talking about how much effort we put into getting it right is that I want you to feel confident and not think of this as a scary operation on your data. We tested it on ourselves, we fixed a lot of these bugs, and we are pretty confident that this is a safe operation.

In Ver. 2.15 there are two types of migration options: online and offline. At Google, we are in an exceptional position because we require zero downtime, but that was useful, since it forced us to write a tool for a live migration from ReviewDb to NoteDb while the server is running.

Migration to NoteDb is pretty much similar to the way you do reindex: there is an online reindex and an offline reindex. You can choose to do it offline, and it will be probably faster, but there will be a downtime. Or you can decide to do it online, and it will be slower, but there will be no downtime.

And then in Ver. 3.0 we are only going to support an offline migration, following the same paradigm as all the other schema upgrades. If you skip between releases, we force you to do an offline update, but if you upgrade one point release at a time, you don’t need any downtime for your schema migration. The same applies to NoteDb: if you migrate from Ver. 2.14 to Ver. 2.15 and then to Ver. 3.0, you won’t have any downtime.
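As a sketch based on the Ver. 2.15 documentation, the offline migration is a separate program shipped in the Gerrit war, while the online migration is enabled with a gerrit.config option; double-check the exact names against the release notes of your version.

# Offline migration: stop the server first, then run
$ java -jar gerrit.war migrate-to-note-db -d /path/to/gerrit-site

# Online migration: let the running daemon migrate in the background
$ git config -f /path/to/gerrit-site/etc/gerrit.config noteDb.changes.autoMigrate true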

Screen Shot 2017-11-15 at 00.00.20.png

Q: (Han-Wen) Is this process parallel?

A: It is parallel if you do it offline; if you do it online, it uses a single thread, because we assume that your server is mostly busy doing other things, and that is why you would want an online migration in the first place.

Benefits of migrating to NoteDb

There are a lot of incentives to migrate to NoteDb. One is that you get new features such as hashtags, and others that we implemented only in NoteDb because they were a lot harder to do in ReviewDb, such as the history of all reviewers on a change. NoteDb manages auditing natively, while in ReviewDb we would have needed a new table called reviewers_audit, which would have been much harder to implement.

Robot comments, introduced in Ver. 2.14, and the ability to mark a change as reviewed to remove clutter from your dashboard, are features that you only get with NoteDb.

What did we learn from migration to NoteDb?

Rewriting every single line of code just takes a long time, and Gerrit has hundreds of thousands of lines of code. Shawn Pearce, my manager and the founder of the Gerrit project at Google, says “I don’t even recognize this” every time he needs to touch NoteDb-related code, and he is still the #1 contributor on the project. We changed it almost beyond recognition.

Everything I’ve said so far is about changes; there is also other data besides changes. Accounts have been unconditionally migrated to NoteDb in Ver. 2.15. What we store for accounts is more of a git config file format than an actual notes-based format. The account is now a config file that contains your name, your e-mail address and your status, which is a new feature in NoteDb. For instance, my account status says that “I am having a talk in England”.
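For illustration, the account data stored in NoteDb looks roughly like the git config snippet below; the values are made up, and the exact file name and keys are assumptions based on the Ver. 2.15 accounts documentation.

[account]
  fullName = John Doe
  preferredEmail = jdoe@example.com
  status = having a talk in England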

New Patch-set comparison

Hi, my name is Alice Kober-Sotzek, and I work at Google. In Ver. 2.15 we have changed the way we compare patch-sets. Let’s imagine we have just a small change with a patchset and two files in it. In the first file, only the first line is modified, and the file consists of one thousand lines. The second file has four lines changed.

Screen Shot 2017-11-15 at 00.02.32.png

Let’s see what happens when I rebase it onto the latest version of master. If I now visualize what patchset 1 and patchset 2 consist of, what would I expect to see? As the author of the change, I would expect that only one line had changed. Let’s just do it and ask Gerrit Ver. 2.14 what the result is.

Screen Shot 2017-11-15 at 00.03.16.png

What’s happening? Why do I have 420 lines changed in one file, and ten additions and seven removals in the other, which was not even touched? Let’s have a look at the content of my file and what is in there.

In Ver. 2.14 we were just hiding all the differences due to rebasing, and that was it. In Ver. 2.15 things look different, because we try to figure out what happened in between the rebase. All the hunks that we are sure were introduced by something else are displayed in a different color. We have not only red and green, but orange and blue as well; these mark everything that was introduced by other changes in between the rebase.

Screen Shot 2017-11-15 at 00.04.34.png

This feature only works in PolyGerrit; in the GWT UI it is not shown at all.

Can I rely on that and trust what I see there?

The decision we made is that all the hunks marked orange and blue are the ones we are sure about, and you can safely avoid looking at them, because they are the ones that happened because of other changes occurring in between the rebase.

For the ones marked red and green, we give no guarantee: they could be introduced by other changes, by conflicts, or they may be added by the patchset. With that coloring, it is much easier to look at the things that are important.

Screen Shot 2017-11-15 at 00.05.32.png

Killing Draft Changes.

Some people think of draft changes as a visibility feature, so that only my reviewers can see them. Other people use them for changes that are not yet ready for review, leaving them as drafts until they are. You may even just use the server as a store for your changes, reworking the code through the Gerrit in-line edit feature until it is ready, and ending up with an absurd number of patchsets. Nobody wanted all of these at once, but those are the semantics we ended up with for draft patchsets.

Patchsets could even be deleted, as if they had never existed. Or they could be kept invisible, so that you see a gap; and a draft could even be the current patchset: the UI claims that the current patchset is three, but then some other operation reports that this patchset is not current anymore, just because the current one is a draft!

Screen Shot 2017-11-15 at 00.06.36.png

Drafts were trying to be all of these things at once; the main problem is that they collide all these use cases into a single feature. In Gerrit Ver. 2.15 we killed drafts. Instead of drafts you now have several small features. There are “Private Changes”, which only you and your reviewers can see. There are Work-In-Progress (WIP) changes: while the WIP flag is set, nobody gets notifications about the change, so you can push 30 patchsets and the reviewers will not get spammed with 30 e-mails. Last but not least, there is the Change Edit, introduced long ago, which can be used in conjunction with WIP changes.
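From the command line, these new states map to push options; a quick sketch, assuming the Ver. 2.15 push option syntax:

$ git push origin HEAD:refs/for/master%private   # visible only to you and your reviewers
$ git push origin HEAD:refs/for/master%wip       # no notifications while work is in progress
$ git push origin HEAD:refs/for/master%ready     # take the change out of WIP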

Marking Changes as reviewed

Another thing we introduced in Ver. 2.15 is the ability to mark changes as reviewed. For instance, below is a change screen from my dashboard this morning: some changes are highlighted in bold and others are not. The bold changes feel like they are yelling at me, demanding my attention, just like in e-mail where bold means “you need to look at me now.” When you are using NoteDb, Gerrit Ver. 2.15 allows you to un-bold any of them by just clicking a button on the change screen. Or, as with e-mail, you may wish to remove some changes from your dashboard entirely: there is a function that lets you unilaterally remove a change from your dashboard, which the others cannot undo; it just makes the change go away.

It was annoying that I could not mark them as reviewed manually, and it was really irritating when patchsets disappeared. It is also really irritating that, when I receive a review with a bunch of comments, I have to say “done, done, done, done” on each one of them.

Screen Shot 2017-11-15 at 00.10.01.png

And then, when I push a new patchset, I just forget to send all those draft comments that say “done”. So I added a push option that publishes comments on push: all the draft comments are published automatically. Instead of clicking through every patchset of the change, checking whether it has any drafts and clicking send on each of them one by one, I can just set an option.
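The push option looks like this, assuming the Ver. 2.15 syntax:

$ git push origin HEAD:refs/for/master%publish-comments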

Screen Shot 2017-11-15 at 00.10.52.png

It can be specified on the command line, but it is difficult to remember. So there is also a user preference with a checkbox, which I really encourage you to select in your user preferences screen; it is only available on PolyGerrit.

CCing a Change under review

When you want to add a co-worker as a reviewer on a change, you may get an error saying that your co-worker is not a registered user. We have partially solved this problem by adding the ability to CC an e-mail address, also only available on NoteDb. There are technical and even product reasons why we don’t want to add them as reviewers, some related to the accountability of everyone working on that change: people need an account to be a reviewer. But if somebody just wants to look at the change, or it is a mailing list, it doesn’t have to be a real user: you can CC any e-mail address on a code review, if you turn on the config option that allows it.

Screen Shot 2017-11-15 at 00.11.48.png

Better error messages.

Another itch that Han-Wen scratched was the introduction of better error messages. Sometimes you do a push, and it fails with a very unhelpful error message that says “Prohibited by Gerrit”.

It turns out that it is not difficult to check whether a user lacks a permission and then return a permissions error, so we now include a message saying that the user lacks that permission. It is not perfect: it doesn’t say precisely where in the project hierarchy the permission comes from, but at least it says “this is the problem”; it tells you not the solution, but it highlights where the problem is.

Screen Shot 2017-11-15 at 00.12.37.png

Robot comments and automatic fixes.

Robot comments are a feature that we believe will start to ramp up in adoption. With NoteDb, robot comments can suggest fixes as well, and then you have a button that says “apply fixes”, which creates a change that applies the fixes.
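For illustration, a robot comment with a suggested fix is posted through the standard set-review REST endpoint; the JSON below is only a sketch, with field names taken from the RobotCommentInput and FixSuggestionInfo entities of the Gerrit REST documentation, and all values (file, range, robot id) purely hypothetical.

$ curl --user admin:secret -X POST -H "Content-Type: application/json" \
    -d '{
      "message": "Automated analysis",
      "robot_comments": {
        "src/Main.scala": [{
          "robot_id": "lint-bot",
          "robot_run_id": "run-42",
          "line": 10,
          "message": "Unused import",
          "fix_suggestions": [{
            "description": "Remove the unused import",
            "replacements": [{
              "path": "src/Main.scala",
              "range": {"start_line": 10, "start_character": 0,
                        "end_line": 11, "end_character": 0},
              "replacement": ""
            }]
          }]
        }]
      }
    }' http://localhost:8080/a/changes/1001/revisions/current/review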

Many more improvements in the bag.

There are a lot of speed improvements in PolyGerrit, so changes with large diffs will load a lot faster. An admin can delete comments that really shouldn’t be there. We can explicitly keep track of the fact that a change reverts another change, so that you can search for whether a change was reverted. It can even tell you whether it was a pure revert at the Git level only, or whether other changes were sneaked in under the claim that it was only a revert, which happens way too often, I think.
There are better server consistency checks and a new plugin endpoint for dashboards; there is a new URL scheme, as described by Patrick, and the way is paved for putting in even more new features.

Screen Shot 2017-11-15 at 00.13.15.png


Gerrit User Summit 2017 – Sold Out

GerritUserSummit-London-Splash.jpg

The Gerrit User Summit 2017 is entirely SOLD OUT this year: a fantastic result and a huge audience increase from last year, with attendees coming from 14 different countries.

It is essential for a healthy OpenSource project to have a diverse and growing community of users, which assures longevity and the organic growth of features and ideas.

See you at CodeNode on Monday, 2nd of October at 8:00 AM: come and help shape the future of Gerrit Code Review, growing and thriving in the ecosystem of Git code review tools.

PolyGerrit User Experience at Gerrit User Summit in London

Google.GerritUX_Banner_200X85_cc.jpg

With only seven days to go, the Gerrit User Summit is approaching fast! There has been a lot of discussion about Gerrit usability in a recent thread on the GoLang project. More and more, the focus of OpenSource communities is the ease of adoption and contribution, after, of course, the solidity and efficiency of the review process. A usable interface, clear and self-explanatory even for newbies, can contribute to the success of a project.

PolyGerrit, a fresh start from the Chromium project

In July, the entire Chromium project moved from Rietveld to PolyGerrit: this event brought a lot more users to the Gerrit platform and triggered a creative and open debate on the future of the Gerrit UX. Chromium’s unique workflow has driven a lot of improvements, some of which will land in Gerrit 2.15.

Logan Hanks from Google will present at the forthcoming Gerrit User Summit in London the discoveries and developments of the v2.15 PolyGerrit UX, mainly driven by the same people that are using it every day on the Chromium project.

A new visual design for the PolyGerrit project

There is a new visual design for the change view to present. You may have already seen some elements of this design rolling out to googlesource.com. Arnab Banerjee from Google will take us through the complete design and show us where it’s going in the coming weeks.

A PolyGerrit booth has been set up at the conference to allow anyone to experiment with the new UX and go through some research trials, providing meaningful feedback for the evolution of the user interface.

Only ten seats left, it’s never too late

The Gerrit User Summit 2017 is almost entirely booked: HURRY UP AND REGISTER TODAY so that you can see and be part of the evolution of the Gerrit UX. Your user journey and requirements can be part of the next version of Gerrit: be part of the change and part of the Community.

Beyond PolyGerrit, many more talks are coming

More exciting talks are coming as well, including multi-master, a brand-new Jenkins integration and an exclusive face-to-face Q&A with the Gerrit maintainers. See the full schedule at the Gerrit User Summit 2017 site.