Thanks to Theo I have the blog at gentoo working again =)
For a while I used my company blog as a fallback.
Now I’m reading this and I wonder whether we, as a distribution, did or did not help to keep things as lean as possible, giving enough feedback to upstream and proposing tools that work well.
Diego is probably doing his best, even if from time to time he gets frustrated since things might not move as fast as they should.
I do hope we won’t end up unleashing strange beasts like systemd and upstart (those are missing from that abstract, but for me they are the most immature technologies around nowadays), or equivalents, before they behave.
Ok, we got a new council, and I’m still there, so thank you for renewing your trust in me =)
Looks like fewer people found me, or what I did, compelling enough to vote me into the council, so surely I did something wrong. Solar got first place, so his cleanly cut ways are perceived better.
I started polling people about what they feel about Gentoo and what they’d like. The first thing I noticed is that people are sick of endless discussions on marginal stuff, and even more sick of outside projects trying to push their agenda on Gentoo the shovel-in-throat way.
The second item is about trying to make the place nicer for everybody and better involve our large userbase. We used to be the nicest distribution regarding attitude towards newcomers and slow learners; now other distributions are better. We could re-learn from them.
That’s what I have perceived so far. As I said before, I see the council just as the last resort to get something decided if we, developers, cannot find a broad agreement. Solar prefers to be proactive, in my opinion. You liked him, so I guess we as a council should try to push people to express themselves and get new and interesting stuff done, instead of discussing what the new way to define a quantity next to infinity is, or why embedding information somewhere is right or wrong in theory.
That said, how wrong am I so far, and how could we get Gentoo to improve even more?
There are many discussions about how Theora should be used and how it somehow smokes x264.
I do not believe it, or at least I won’t believe the proofs until I try them myself.
Could any of the Theora zealots reading this please provide a reproducible benchmark, so everybody could see for themselves how good or bad Theora is?
A script that fetches the new Theora encoder and ffmpeg, takes an original, produces two videos using Theora and H.264 (no audio) at the same bitrate for both, and in the end outputs CPU and memory usage would be great.
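A minimal sketch of such a script could look like the following; the input clip name, the target bitrate, and the presence of GNU time at /usr/bin/time are all my assumptions, not part of the original request:

```shell
#!/bin/sh
# Hypothetical benchmark sketch: encode the same raw clip with Theora and
# H.264 at the same bitrate, recording wall time and peak memory for each.
# Assumes an ffmpeg built with libtheora and libx264, GNU time at
# /usr/bin/time, and a raw source clip named input.y4m.

encode_bench() {
    codec=$1
    out=$2
    # -an drops the audio, -b:v pins the video bitrate so the two runs
    # are comparable; GNU time -v reports peak resident memory.
    /usr/bin/time -v ffmpeg -y -i input.y4m -an -c:v "$codec" -b:v 1000k "$out" \
        2> "bench-$codec.log"
}

# Only run the actual encodes where ffmpeg and the source clip are available.
if command -v ffmpeg >/dev/null 2>&1 && [ -f input.y4m ]; then
    encode_bench libtheora out-theora.ogv
    encode_bench libx264 out-h264.mkv
    grep -E 'Elapsed|Maximum resident' bench-libtheora.log bench-libx264.log
fi
```

Running both encodes from the same script, on the same input, is what makes the numbers comparable between machines.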
Last week I spent some time visiting some friends in Massa, a quite nice city near a quite nice seaside. It’s all fine with sand, sea and the usual places where to relax. We went to a crêpe and piadina shop, and there we found the owner pretty depressed: he is thinking about closing the shop by the summer.
Tough times, you think. Well no: Creppolo is quite well known and lots of people gather there by day and by night; if you want to eat something tasty during the night and you are around there, you’ll probably end up there… So, why close? Well, apparently the municipality council/mayor doesn’t like to have shops open the whole night (even if it’s a tourist city on the seaside…) and decided that by 2AM every shop MUST be closed. Creppolo made most of its sales between midnight and 4AM.
That is pretty annoying since I liked that place, and pretty annoying for all the other shops that sell stuff at night (there are many). I always appreciate the short-sightedness of the people elected by the people.
Today is an election day for Europe; please make sure you don’t miss this chance to vote for somebody who won’t spoil your hard work ^^
That’s a follow-up from Diego about somebody quite angry who went to vent a bit too much and got quite a backlash. Then he also got some issues with his blogging software hiding all the comments about his rant. Now things are more or less back to normal and you can read the comments again.
Given I’m the first to use strong words about software I dislike (like cmake), I try to be ready to be proven wrong. That usually means either somebody planted some clues on me with the proper bat, or things improved in the meantime; in both cases I’m usually happy, or happier, to be proven wrong. Being wrong is something that may happen.
Now, I hope that what Matti and Boudewijn did was out of frustration, since that is what I assume after stupidity and before malice. If you release something, it’s YOUR FAULT if it’s broken, since YOU are the committer in your tree. Smearing other people to cover the fact that YOU, too, messed up is plainly wrong…
Recently wesnoth got released and there is already an ebuild in portage.
Since upstream stated that autotools were being deprecated (actually some people stepped up to avoid that in the end), Mr_Bones crafted an ebuild using cmake.
Here are some values:
build time for cmake itself -> 3.20m (take everything with at least some variance)
build time for wesnoth using cmake -> 8.15m (again some more, some less depending on the runs)
build time for wesnoth using autotools -> 8.00m (again some less, some more depending on the runs)
My method is quite simple:
– first I fetch the sources, then I build cmake and wesnoth 1.6a a few times, using time to see how long each build takes.
– then I take the older 1.4.7 ebuild, remove the no-python bit from src_prepare since it’s unneeded, call it 1.6a-r1, and build that a few times as well.
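The timing loop behind this method can be sketched as a small helper; the emerge invocation in the comment is just the example from above, and on a box without emerge the helper works with any command:

```shell
#!/bin/sh
# Hypothetical sketch of the timing method: run the same build command a few
# times and print the wall time of each run, so outliers are easy to spot.
# date +%s gives one-second granularity, enough for multi-minute builds;
# use time(1) if you need finer resolution.

time_runs() {
    runs=$1; shift
    i=1
    while [ "$i" -le "$runs" ]; do
        start=$(date +%s)
        "$@" >/dev/null 2>&1
        end=$(date +%s)
        echo "run $i: $((end - start))s"
        i=$((i + 1))
    done
}

# On a real Gentoo box one would time the actual builds, e.g.:
#   time_runs 3 emerge --oneshot games-strategy/wesnoth
# Here a cheap stand-in command demonstrates the helper:
time_runs 3 sleep 0
```

Repeating the runs like this is what lets you say “some more, some less depending on the runs” with a straight face.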
Apparently cmake adds nearly 1/3 to the actual build time if you don’t have it already installed (and you shouldn’t), and compared to the autotools system it adds some time by itself.
In short, people shouldn’t use cmake if autotools are available.
Here is a short bullet list about doing benchmarks:
- Reproducibility: your numbers are worth nothing if nobody can reproduce them, so you have to provide, along with them, a script or a detailed description of what you did.
- Statistics: outliers and other artefacts may skew your results; make sure your script is run enough times.
- Indices and values: if you want to prove something you need hard numbers, possibly something everybody can understand easily and cannot misunderstand: CPU cycles, time to complete and memory usage are quite good, given you aren’t testing on particularly different architectures or systems.
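The statistics bullet can be handled with a few lines of awk; this sketch (the input format, one timing per line, is my assumption) prints the mean and the population variance of the collected runs:

```shell
#!/bin/sh
# Hypothetical sketch for the statistics point: read one timing per line and
# print mean and population variance, so outliers and noise become visible.

mean_variance() {
    # LC_ALL=C keeps the decimal point locale-independent
    LC_ALL=C awk '{ s += $1; ss += $1 * $1; n++ }
                  END { m = s / n
                        printf "mean=%.2f variance=%.4f\n", m, ss / n - m * m }'
}

# Example: three runs taking 8.00, 8.15 and 8.10 minutes
printf '8.00\n8.15\n8.10\n' | mean_variance
# prints: mean=8.08 variance=0.0039
```

A near-zero variance like this tells you the runs agree; a large one tells you to look for outliers before trusting the mean.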
Quite short, isn’t it? Still, many people just state their values by inference (like “it ought to do 2x the syscalls, thus it should be twice as slow”), or try to benchmark something that isn’t what they want to test (like “glxgears is slower now, so mesa is slower at rendering complex scenes”), or use quite different settings and configurations (think of your application with a minimal configuration versus the same one with a larger configuration).
Usually the best way to get a meaningful benchmark is to prepare a script, give instructions about how to use it (like which versions of companion software you are using), and then provide the numbers (mean with variance, if you are so inclined) and a summary of the system; this way others can play and try it themselves. This is quite useful since the optimization you are working on may be great with gcc 4.3 on PowerPC but problematic on x86 with gcc 2.95.
Other times you just want people to compare something that is _quite_ influenced by the surrounding system or is annoying to set up; in those cases having a full system image is quite a boon, since _everything_ can be the same. And given how easy it is to use virtualization/emulation software nowadays, it just takes a bit of bandwidth.