frequency of repository updates

News and Announcements related to GhostBSD
ASX
Posts: 988
Joined: Wed May 06, 2015 12:46 pm

Post by ASX »

First, apologies for the long, kraileth-style post. :D

There is an open question about our repositories: the frequency of updates.

Because we said we will perform some testing, the time required to run the tests is a key factor. I would think that one week of testing is a reasonable amount of time; we are not expected to catch all bugs, only the major ones. If there is no major breakage (X11, LibreOffice, Firefox, Thunderbird, ...) the repo can be considered good and moved to "current".

This is also a good chance to talk about the lifetime of the "previous" repo. It was said that when we move "latest" to "current" we also move "current" to "previous". On second thought, there could be cases where this is not desirable: for example, with relatively urgent security updates we may want to move "latest" to "current" but leave "previous" as it is (most likely a security update will not leave enough time for proper testing).
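
To make those two promotion modes concrete, here is a minimal sketch in Python; the directory layout and names are made up for illustration, this is not GhostBSD's actual repo tooling:

import shutil
from pathlib import Path

# Hypothetical layout, for illustration only: /repos/<arch>/{latest,current,previous}
REPO_ROOT = Path("/repos")

def promote(arch: str, security_update: bool = False) -> None:
    """Promote 'latest' to 'current' for one arch.

    The normal weekly promotion also rotates 'current' into 'previous';
    for an urgent security update we skip that rotation and keep the
    existing 'previous' untouched as a known-good fallback.
    """
    base = REPO_ROOT / arch
    latest = base / "latest"
    current = base / "current"
    previous = base / "previous"

    if not security_update:
        # Regular cycle: current -> previous, then latest -> current
        shutil.rmtree(previous, ignore_errors=True)
        current.rename(previous)
    else:
        # Security cycle: replace 'current' but leave 'previous' alone
        shutil.rmtree(current, ignore_errors=True)

    shutil.copytree(latest, current)

# promote("amd64")                        # normal weekly promotion after testing
# promote("amd64", security_update=True)  # urgent security push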

~~~

A second factor that comes into play is the build capacity of our systems. I will refrain from describing the details here, other than the basics:

A Xeon E3-1245v2 (4c/8t) based system (our current builder) can build packages at a rate of 300~350 pkgs/hour.
A dual Xeon E5-2620v4 (2x 8c/16t, DragonFly BSD) can build packages at a rate of 1200 pkgs/hour.

By using ccache on our E3-1245v2 builder, the rate is a bit better, in the range of 400~550 pkgs/hour.

Now, ccache is a good thing; it can speed up the build by a factor of 4x~8x, but it has a few drawbacks:
- it is used only for C/C++ programs
- each new source version invalidates that package's cached files, which then have to be rebuilt
- each time a compiler is rebuilt (even when the compiler version stays the same), all packages depending on that compiler are rebuilt, and their ccache files are discarded and regenerated
- it uses approximately 100 GB of disk space for each arch.

I'm mentioning this because we had good results from ccache while we were building from the quarterly ports branch, but things got worse after switching to the "latest" ports branch, and the reason is exactly ccache effectiveness: higher when building from "quarterly", lower when building from "latest".
In short, we can achieve higher build rates when using "quarterly".
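
As a rough illustration of why the cache hit ratio matters, here is a back-of-the-envelope model in Python; the hit ratios below are purely illustrative guesses, only the base rate and the 4x~8x speedup come from the figures above:

def effective_rate(base_rate: float, hit_ratio: float, speedup: float) -> float:
    """Estimate pkgs/hour when a fraction of builds hit the ccache.

    base_rate: cold-build rate in pkgs/hour (300~350 on the E3-1245v2)
    hit_ratio: fraction of package builds served mostly from ccache
    speedup:   how much faster a cache-hit build is (4x~8x per the post)
    """
    # Average time per package is a mix of cached and uncached build times.
    avg_time = hit_ratio / (base_rate * speedup) + (1 - hit_ratio) / base_rate
    return 1 / avg_time

BASE = 325  # middle of the 300~350 pkgs/hour cold rate

# Illustrative hit ratios only: "quarterly" churns less week to week,
# so more builds hit the cache than on "latest".
for branch, hits in [("quarterly", 0.45), ("latest", 0.20)]:
    print(f"{branch:9s} ~{effective_rate(BASE, hits, 6):.0f} pkgs/hour")

With those guesses it lands around ~520 pkgs/hour for "quarterly" and ~390 for "latest", which is roughly the spread we see in practice.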

Other things affect performance too, of course (RAM, use of SSDs or NVMe disks, ...), but in my opinion they are not as important as the CPU and ccache; or, if you prefer: given the current E3-1245v2 CPU and at least 32 GB of RAM, they are not significant factors.

~~~

Also worth mentioning that, week by week, you can expect something like:
quarterly branch: 3000~7000 packages to be rebuilt,
latest branch: 10000~15000 packages to be rebuilt.

Assuming the guaranteed rate (340 pkgs/hour), it takes nearly 4 days to build a complete repo; the time required to build the weekly updates can be 24 hours (quarterly) or 48 hours (latest), for each arch.
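
Those figures are simple division; a quick check in Python, assuming a full repo of roughly 32,000 packages (which is what ~4 days at 340 pkgs/hour implies, not an official count):

RATE = 340          # pkgs/hour, the "guaranteed" rate used above
FULL_REPO = 32000   # assumed full-repo size, per arch; not an official count

def hours(pkgs: int, rate: float = RATE) -> float:
    return pkgs / rate

# All figures are per arch; double them for amd64 + i386.
print(f"full repo : {hours(FULL_REPO) / 24:.1f} days")              # ~3.9 days
print(f"quarterly : {hours(3000):.0f}~{hours(7000):.0f} h/week")    # ~9~21 h
print(f"latest    : {hours(10000):.0f}~{hours(15000):.0f} h/week")  # ~29~44 h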

~~~

Now that you have some info about our requirements and constraints, you can express your view: what should we do in terms of infrastructure?

a) use one big, powerful server, and maybe release updates bi-weekly: one week amd64, one week i386
b) use two medium servers (one for i386, one for amd64)
c) ... your ideas ...

Also, should we really build from "latest", or will we be forced to fall back to "quarterly"?

And ultimately, how frequently will we release updates?

* I want the repos ready by the end of the month. :twisted: