It took more or less an entire week to build the first full amd64 repository; the build was performed in 3 or 4 sessions for various reasons, mainly because it ran on the machine that hosts our web services.
Time to give you a few numbers:
packages built: 26116
packages failed build: 28
packages skipped (depend on failed above): 123
packages ignored: 542
USED STORAGE:
- amd64 repository: 72 GB (73512 MB)
- distfiles: 99 GB (100560 MB)
- ccache: ~ 35 GB
- ports: 1338 MB
- log files: 667 MB
NB: those figures need to be amended, because the files actually reside on a ZFS lz4-compressed filesystem; the log files in particular will require much more raw storage (logs ~= 10x, ccache ~= 2x).
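Because the datasets are lz4-compressed, `du`/`df` report compressed sizes. One way to estimate the raw figures is to compare ZFS's physical and logical accounting; a sketch, where `tank/synth` is a placeholder for our actual pool/dataset names:

```shell
# Compare on-disk (compressed) vs. logical (uncompressed) usage per dataset.
# 'used' is physical, 'logicalused' is pre-compression, and 'compressratio'
# is the ratio between them.
zfs get -r -o name,property,value used,logicalused,compressratio tank/synth
```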
building our repository (builder)
Re: building our repository
Next question: one week later the ports tree has changed, so how many packages will be rebuilt? Let's see ...
Re: building our repository
ASX wrote: Next question: one week later the ports tree has changed, so how many packages will be rebuilt? Let's see ...
Code:
Total packages that would be built: 11435
The complete build list can also be found at:
/tmp/synth_status_results.txt
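For reference, the figure above comes from Synth's dry-run mode; something along these lines (assuming the ports tree was just updated) reproduces it:

```shell
# Update the ports tree, then ask synth what an incremental build would do.
portsnap fetch update
# Dry run: scans the tree, compares against the existing repository, and
# prints the count; the full rebuild list is written to
# /tmp/synth_status_results.txt
synth status
```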
Re: building our repository
Everything looks alright, unless 4 big ports build together.
If it were on a dedicated server by itself, on SSD only, how much space do you think we need?
Also, how much RAM would you prefer, 32 GB or 64 GB?
kraileth, any news about the server?
Re: building our repository
ericbsd wrote: Everything looks alright, unless 4 big ports build together.
Well, among the 27000 pkgs, probably fewer than 200 are "big"; that's why marino suggested making use of swap: in practice, 26800 ports will build in RAM, and the remaining 200 will make use of swap.
The problem, in my opinion, is that those big packages will also use a lot of build time; we could say something like:
20% of packages will use 80% of the total build time
80% of packages will use 20% of the total build time.
That's why I like his alternative suggestion more: to create two "profiles", one to build the "big" pkgs (say using 2 builders), and one to build the remaining ones (say using 4 or 6 builders).
ericbsd wrote: If it were on a dedicated server by itself, on SSD only, how much space do you think we need?
32 GB OS
32 + 32 GB swap
50 + 50 GB ccache
100 GB distfiles
75 GB amd64 repo
75 GB i386 repo
10 GB logs
-------------------------
about 460 GB total (456 GB itemized), so 2 x 300 GB would be OK, or 3 x 200 GB would be even better.
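A quick sanity check of the itemized budget above (sizes in GB; the stated total is the rounded figure):

```python
# Storage budget from the list above, in GB.
budget = {
    "OS": 32,
    "swap": 32 + 32,
    "ccache": 50 + 50,
    "distfiles": 100,
    "amd64 repo": 75,
    "i386 repo": 75,
    "logs": 10,
}

total = sum(budget.values())
print(total)  # 456, i.e. roughly 460 GB
```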
(It is implied that when the repos are ready, they will be transferred to the webserver for deployment; we will still set up a webserver on the builder itself, but only for the test repos, to be used by developers and/or testers.)
ericbsd wrote: Also, how much RAM would you prefer, 32 GB or 64 GB?
Honestly, I've been asking myself the same question for days: would more RAM be better, or a CPU with more cores?
It is difficult to answer, because more RAM would allow more parallel builders (say 8 builders with 64 GB); on the other hand, with only 4 cores/8 threads, that could already be too much for that CPU.
Therefore, if I had to choose, I would lean toward a more powerful CPU instead of more RAM.
I'm quite perplexed by ccache's (lack of) performance, and I'm starting to think it might be related to the fact that the builders are chrooted each time at a different mountpoint (SL01, SL02 ...); this is yet to be verified.
ZFS, thanks to its ARC cache, offers great performance while building the local environment on tmpfs; in fact I have observed and measured little to no disk read activity while building lots of small packages (zpool iostat 3).
Generally speaking I'm also somewhat disappointed by ZFS performance in one respect: when there are lots of files in the same dir (like the repo dir), there is a noticeable performance penalty ... difficult to say whether UFS will perform better; probably yes, especially if we use UFS without journaling.
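The two observations above can be reproduced with quick checks; a sketch, where the package-directory path is an assumption based on Synth's defaults:

```shell
# 1) Watch physical disk activity every 3 seconds while a build runs;
#    near-zero read columns mean ARC/tmpfs are absorbing the I/O.
zpool iostat -v 3

# 2) Feel the large-directory penalty on the repo dir by timing an
#    unsorted listing of it (path is our assumed layout).
time ls -f /var/synth/live_packages/All > /dev/null
```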
ericbsd wrote: kraileth any news about the server?
Yes please, let us know.
Re: building our repository
ericbsd wrote: kraileth any news about the server?
Not yet, unfortunately. I've been a bit out of luck with that, it seems. I had made an appointment to meet my boss last Tuesday - but then he was ill. And now he's currently on vacation... I hope to get to discuss the matter around Thursday.
However I've decided that I can probably support the server cost with a few Euros each month. I'll just have to convince my wife that this is an important thing.
Re: building our repository
kraileth wrote: I'll just have to convince my wife that this is an important thing.
Or simply divorce, and problem solved!
Btw, I'm joking, of course! LoL
Re: building our repository
A little update about package building, synth, ZFS and ccache.
The ccache is a structure made of many, many small files (say 1,000,000 files for a 30 GB cache), and it appears that ZFS doesn't deal that well with so many small files.
I ran a little test on my machine, comparing ccache on ZFS and ccache on UFS side by side, and the winner is ... UFS.
Just to give you an idea: copying 50 GB of ccache content from ZFS to UFS proceeded at a speed of 2 MB/s, which is ridiculous of course.
Therefore I changed the server setup to use ccache on top of UFS, with the UFS filesystem built on top of a ZFS volume (zvol). As far as I can see it performs similarly to a native UFS filesystem.
I then decided to switch the whole 'builder' filesystem to UFS on top of a zvol as well.
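A minimal sketch of that setup on FreeBSD (pool name, zvol size, and mountpoint are assumptions, not our exact layout):

```shell
# Create a fixed-size zvol, format it as UFS2 with soft updates
# (no journaling), and mount it for ccache.
zfs create -V 50G tank/ccache0
newfs -U /dev/zvol/tank/ccache0
mkdir -p /var/cache/ccache
mount /dev/zvol/tank/ccache0 /var/cache/ccache
```

The zvol keeps the space inside the pool (snapshots and send/recv still work at the volume level), while the many-small-files workload hits UFS instead of ZFS metadata paths.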
After that I restarted a complete build from scratch, this time from the 2017Q1 quarterly repository; we will see in the next few days how much performance has improved.
The average using 2 builders x 4 jobs was approximately 130 pkgs/hour .... we will need to wait until at least a few thousand packages have been built to understand the performance gain.
Hope that info may be of some interest to you too.
Re: building our repository
ASX wrote: Or simply divorce, and problem solved!
Women are somewhat complicated.
Re: building our repository
I'm recording the following info for my own use:
while building editors/openoffice-4 I measured a total tmpfs usage of approx 14 GB (11.9 + 2.0);
editors/openoffice-devel instead failed, exceeding the max tmpfs size (currently 12 GB).
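A crude way to capture numbers like these during a big port build (FreeBSD `df`; the log path is arbitrary):

```shell
# Sample tmpfs usage once a minute while the build runs; the peak values
# in the log show how large the builder's work area really gets.
while sleep 60; do df -m -t tmpfs >> /tmp/tmpfs-usage.log; done
```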