xorriso in ghostbsd-build


ASX
Posts: 988
Joined: Wed May 06, 2015 12:46 pm

Re: xorriso in ghostbsd-build

Post by ASX »

hi Thomas,
scdbackup wrote: Hi,

i set up that system 7 years ago. AMD with 4 x 2.4 GHz, kvm-ready.
If i remember right, the master boot loader is GRUB Legacy, installed and
managed by the old Debian system. It chainloads partition boot loaders.
I know that grub-legacy is much simpler, but grub2 introduced several extensions to deal with more current filesystems, possibly also making it simpler to extend with future filesystems.
Main question is whether i should upgrade only FreeBSD or also the GNU/Linux
and pull NetBSD from qemu+kvm to real iron.
First part of any endeavor would be making backups of the current partitions
when they are not active.

All choices are up to you, depending on time and available hardware.
Personally I would avoid virtualized environments in this specific case: the chance that the VM layer introduces some side effect is very high, just my opinion.
I can use qemu+kvm on my Debians to pass a real DVD burner to a guest Debian
and then i can use it for SCSI command passthrough via the "vda" driver.
https://dev.lovelyhq.com/libburnia/web/ ... emuXorriso
But that's not good enough to get the kernel to handle the drive by the "sr"
driver. "sr" suffers from global mutexiosis. But we still have "sg" which
was not worth being mutexed back then, in the Big Kernel Lock Eradication
stampede. So i gave up my plan to test fixes for "sr" and rather made
xorriso ready for "sg", ye olde deprecated way of Linux 2.4.
https://dev.lovelyhq.com/libburnia/web/ ... entLinuxSr
Interesting, I didn't know about that. I also hope that the "per driver lock" (which sounds very weird just by its name, because it implicitly excludes parallel use of the same driver) has been removed since then.

I have never used optical drives that much, but the "mutexiosis" doesn't apply to SATA disks, at least not to the same effect.

However, by reading the corresponding email, https://lkml.org/lkml/2010/9/14/339, I gather that some form of lock already existed and it was simply changed to a mutex lock, which should not have changed the behavior, yet it seems it did.
I'm wondering if it was an unwanted side effect, or some unexpected interaction between components, or something else entirely. :?:
scdbackup
Posts: 20
Joined: Thu Sep 21, 2017 9:14 am

Re: xorriso in ghostbsd-build

Post by scdbackup »

Hi,

ASX wrote:
I also hope that the "per driver lock" (which sounds very weird just by its name,
because it implicitly excludes parallel use of the same driver) has been
removed since then.
Nothing has happened since the introduction of the global mutex in driver "sr"
in 2010. The big injustice is that only SCSI passthrough via "sr" is slowed
down, but not the block device read/write operations via "sr". So to the user it
looks like a fault of the clueless burn program rather than of the kernel.
I gather that some form of lock already existed and it was simply changed
to a mutex lock,
The semantics of a BKL and a mutex are very different, obviously.
According to https://kernelnewbies.org/BigKernelLock the "lock" was
suspended when the process went to sleep voluntarily, and resumed when
the process woke up.
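
To sketch that difference (illustrative only, not the real "sr" code; issue_scsi_command() is a hypothetical stand-in for the driver's command submission, which sleeps while the drive works):

#include <linux/mutex.h>
#include <linux/smp_lock.h>   /* lock_kernel(), pre-BKL-removal kernels */

/* Hypothetical stand-in, not the actual sr code path. */
extern int issue_scsi_command(void *dev, void *cmd);

/* Old behavior (conceptual): the BKL was released whenever the holder
 * slept, so a command to a second drive could start while the first
 * drive was still busy.
 */
int ioctl_with_bkl(void *dev, void *cmd)
{
        int ret;

        lock_kernel();
        ret = issue_scsi_command(dev, cmd);   /* sleeps -> BKL dropped */
        unlock_kernel();
        return ret;
}

/* New behavior: one mutex shared by all sr devices stays held across
 * the sleep, so SG_IO calls to different drives run strictly one
 * after another.
 */
static DEFINE_MUTEX(sr_mutex);

int ioctl_with_global_mutex(void *dev, void *cmd)
{
        int ret;

        mutex_lock(&sr_mutex);
        ret = issue_scsi_command(dev, cmd);   /* sleeps -> mutex kept */
        mutex_unlock(&sr_mutex);
        return ret;
}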

Not only does the current "sr" mutex serialize the calls of ioctl(SG_IO)
to all "sr" device files, but the driver is also taking unnecessary extra
sleeps. The combined throughput on two burner drives is much less than the
throughput on a single drive.

I am using "sg" since 17 months now. It never had a Big Kernel Lock around
its ioctl(SG_IO) and so got no mutex dropped on it. Only problem is that
"sg" is not usable for POSIX i/o and thus not coordinated with mounting.

Everything indicates that the BKL was always unnecessary for that part
of "sr". Other users removed the mutex code from "sr" or changed it to
one mutex per device file. The only reported problem was with 4 IDE drives
at once, where it is not clear whether this really worked before the
BKL removal.
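
Such a "one mutex per device file" variant would look roughly like this (again only a sketch; struct my_cd and handle_ioctl() are hypothetical stand-ins for the driver's per-device structure and its ioctl handler):

#include <linux/mutex.h>

/* Hypothetical per-device structure; mutex_init(&cd->lock) would be
 * done once when the device is set up.
 */
struct my_cd {
        struct mutex lock;              /* one mutex per drive */
        /* ... per-device state ... */
};

extern long handle_ioctl(struct my_cd *cd, unsigned int cmd,
                         unsigned long arg);   /* hypothetical */

long cd_ioctl(struct my_cd *cd, unsigned int cmd, unsigned long arg)
{
        long ret;

        mutex_lock(&cd->lock);          /* serializes only this drive */
        ret = handle_ioctl(cd, cmd, arg);
        mutex_unlock(&cd->lock);

        return ret;
}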
I'm wondering if it was an unwanted side effect, or some unexpected
interaction between components, or something else entirely. :?:
First courage, then negligence, meanwhile arrogance.

I want my old kernel 2.6 back! With sufficient patience after automatic
drive tray loading, with manually ejectable tray after reading via the
block device layer, and with simultaneous SG_IO on all /dev/sr*.

And maybe a fix for the TAO CD read-ahead bug, which always was and
probably always will be. It even has its own rumor as a kernel code comment.
https://www.spinics.net/lists/linux-scsi/msg94638.html

And of course i want it on my Xeon-driven i/o monster. Not on that old 2-core
system whose southbridge radiator fell off after 8 years of service.

"A man can dream though. A man can dream." - Hubert Farnsworth.

Have a nice day :)

Thomas