

Re: Release of 1.3.2?



>   Guy> I'm in favor of much more rapid revisions.  
> 
> Seconded.  
> 
> Rapid, and transparent, revisions are actually an advantage.  People just
> have to understand that it is better for them to get a CD covering several
> hundred MB _and_ being able to semiautomatically upgrade the few newer files
> from a Website. Anytime they want. They can always be current. Even a book
> vendor could press that on the title.


Just so everybody thinks about this a bit more - I'll disagree (sort of).

Point releases should just be for security fixes and critical bugs - and
nothing more.  They should be small, and infrequent.

The whole Linux development model (see Eric Raymond's "The Cathedral
and the Bazaar") is an extremely effective model for rapidly evolving
or growing a system.  The effect of this is that there are relatively
few really serious bugs, but lots of little ones.  Unfortunately,
small bugs tend to interact in any system - so their effect becomes
quite magnified.  

This is why our 'unstable' distribution, which changes daily, lives up 
to its name.  Every once in a while, something gets upgraded, which 
affects something else -- and WHAMMO! -- your system gets flattened.
That's OK for us, since we're developers and we can take it.  Plus it's
a part of the whole debugging process.

Since we don't want to expose our users to that (or our own production
systems) - we have a 'stable' distribution.  We spend months testing
it to make sure there aren't any major glitches in it.  The result
is that we get the best of both worlds - few really serious bugs +
most of the small systematic ones are worked out too.

Fred Brooks, in the "Mythical Man-Month" writes:

  Lehman and Belady offer evidence that quanta should be very large
  and widely spaced or else very small and frequent.  The latter
  strategy is more subject to instability, according to their model.
  My experience confirms it:  I would never risk that strategy in
  practice.

Well, he wrote that in 1975, but I think it applies very well to the
Linux model of development.  If you run the 'unstable' distribution
(or, even worse, the latest 2.1.x kernels) -- the "quanta"
(releases) are very small and frequent.  This tends to maximize
the evolutionary growth of the software.  However, the developers
will pay the price in terms of instability of their development
platform.

On the other hand, the 'stable' distribution has "quanta" that are
large and widely spaced.  We only make a major release every 4 to
6 months. Each release is reasonably well tested and known to work 
together - so it is in fact stable.  This is a good thing.  :-)

It's important to realize that we are still quite a small organization,
and we don't have that many actual testers.  Most of the bug reports
are generated by the developers (200 of us) most of whom are running
the unstable distribution (with over 1000 packages).  The packages
that are introduced into the 'stable' minor releases do not have to run
the gauntlet of testing that happens to packages that were introduced
into 'unstable' first, and only became a part of 'stable' at the time of
a major release. 

A package that is tested in 'unstable' isn't being tested against the 
same packages that it would be living with when it is introduced into 
'stable' via a minor 1.3.x release.  There are probably very few
people running (and testing) 'bo + bo-updates'.  So we should avoid 
putting things into bo-updates as much as possible.

So I think it's best to expose the users of 'stable' to as few
minor releases as possible (with very few packages in each).  We should
save our "good stuff" for the major releases.  Of course, security
fixes are a must - those can't wait for a major release.  And there
may be some packages that are so broken that it's better to fix them
right away.

How about this - let's do releases like this:

 1.3      - major release
 1.3.x    - minor releases with critical security bugs fixed
             (stable users should upgrade)
 1.3.x.y  - minor releases with minor bug fixes and adjustments
             (primarily to fix up screw-ups in minor releases)
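The proposed scheme above can be sketched in code - a minimal, hypothetical illustration (the function name `classify_release` is made up here, not any actual Debian tooling), classifying a version string by how many dotted components it has:

```python
# Hypothetical sketch of the proposed numbering scheme.  The release
# "kind" follows from the number of dotted components in the version.

def classify_release(version: str) -> str:
    """Classify a Debian version string under the proposed scheme."""
    parts = version.split(".")
    if len(parts) == 2:        # e.g. "1.3"
        return "major release"
    if len(parts) == 3:        # e.g. "1.3.2" - critical security fixes
        return "minor release (security/critical fixes)"
    if len(parts) == 4:        # e.g. "1.3.2.1" - fix-ups of a minor release
        return "point release (fix-ups)"
    raise ValueError("unexpected version format: " + version)

print(classify_release("1.3"))      # major release
print(classify_release("1.3.2"))    # minor release (security/critical fixes)
```

Nothing more is needed: the depth of the version number alone tells a stable user how urgent the upgrade is.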

Of course, we can't control when critical security bugs will pop up,
but I'd like to see the number of 1.3.x releases limited to 3-4 between
major releases.  We should avoid 1.3.x.y releases - except when we
screw up and making another minor release would be too embarrassing.

Adding xfree86 3.3 into the release to fix security bugs was certainly
worthy of a 1.3.1 release.  It was a big change.  There is now so much
stuff in bo-updates that it is probably worth making a 1.3.2 release.
Frankly, it looks like there's an awful lot more there than necessary
- but that might be a function of how we did our testing prior to the
1.3 major release.  (I've got some simple ideas that could really
help us out next time)

We should space the minor releases (1.3.x) far enough apart so that,
at any one time, most of the stable users have a clue what the current
version is.  

Minor version numbers shouldn't be too much of a problem for CD vendors 
and book sellers - the actual difference between Linux 2.0.23 and 
Linux 2.0.30 isn't that great - so they would just refer to it as Linux 2.0 
or Linux 2.0.x.  The same thing would probably work for them when they
are selling Debian CDs.

(my apologies for the length of this diatribe)

Cheers,

 - Jim



