Posts Tagged ‘QPP’

QPP is now officially a CMake based project

May 21, 2009

So, I’m happy enough with how CMake has worked out that I’ve merged the branch into the mainline. I’m still a little annoyed with CPack recursively pulling generated archives into later ones, but I’ve managed to work around that, so all is good. I’ll see about getting that filed in their bug tracker shortly.

Now all I have to do is get QPP to the point of a release!

Latest edition of MeasureIT is available

May 20, 2009

MeasureIT is a monthly publication of the Computer Measurement Group (CMG). If you’re into capacity planning, or feel it may be useful for you, this can be an invaluable resource. It has articles useful to both new capacity planners and people who’ve been doing it for quite a while.

It’s also freely available to non-members, although if you find it useful, full membership is probably worth it. Membership includes other technical papers, as well as access to their annual conference proceedings.

And no, I have no financial interest in promoting them. But I am a member.

CMake as promised?

May 19, 2009

So, a few false starts, but with help from the kind people at Kitware, specifically Bill Hoffman, it appears I’m at the point I want to be. The key is to use a separate build directory from the source directory. I’m now happily building, and with the first stages of an RPM package up and going.

I never expected tech support by blog. This is a first for me.

So I’m not done, but the pieces are in place. I have a lot of work to do before an initial release, but at least I’m now happy with the road I’m on. Life is good.

I still think the CMake project needs work. Some of the things I’ve worked on could be handled by the software, such as requiring out-of-directory builds, or could be better documented, such as the building of Windows DLLs (not entirely their problem, but one their users are likely to face). But at this point, I’m ready to commit for the QPP project, and may consider migrating other projects in the future. My key criterion for migration is that a project requires a Windows build, but that’s just because I’m lazy.
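For what it’s worth, the out-of-directory requirement can at least be enforced from within CMake itself. A minimal sketch, using standard CMake variables (the error message wording is mine):

```cmake
# Refuse to configure when the build directory is the source directory.
if(CMAKE_SOURCE_DIR STREQUAL CMAKE_BINARY_DIR)
  message(FATAL_ERROR
    "In-source builds are not supported. "
    "Create a separate build directory and run cmake from there.")
endif()
```

Dropping something like this near the top of the CMakeLists.txt at least turns the mistake into a clear error instead of a polluted source tree.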

More CMake oddities…

May 11, 2009

The story continues…

I’ve got the thing working, and am now working on packaging. This is where things get far messier than they should.

First off, the default source generators are for tar.gz, tar.bz2, and tar.Z. More than I need, but OK. Changing that means changing it for ALL target platforms, and that seems to defeat the purpose. Where things go seriously awry is when the second generator runs: it also hauls in the first archive, and the third hauls in the first two. Even more seriously, they haul in their working directories, with FULL copies and all temporary files. Run this more than once and you quickly run out of disk space. And of course, CMake provides no functionality for removing these old temp files.

Yes, you can work around these. But why? This is some pretty basic stuff! My ~11000 lines of code quickly turned into an archive over 800 MB! That’s too big for a CD!
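For the record, the workaround is roughly this kind of thing. A sketch only; the exact ignore patterns depend on your build directory layout:

```cmake
# Limit source packaging to one generator and keep the build tree
# (and any previously generated archives) out of the source tarball.
set(CPACK_SOURCE_GENERATOR "TGZ")
set(CPACK_SOURCE_IGNORE_FILES
    "/build/"             # out-of-source build directory
    "\\.tar\\.gz$"        # archives from earlier CPack runs
    "/_CPack_Packages/")  # CPack's working directory
include(CPack)
```

With the ignore list in place, repeated packaging runs stop snowballing, but you really shouldn’t have to tell a packaging tool to exclude its own output.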

The second issue arises when I try to build an RPM from a source archive. This is a process that needs to be non-interactive, but that’s not how CMake is designed to run. The Fedora guidelines for creating a .spec file using CMake don’t work. The primary problem is that CMake just won’t work if your current directory is different from the one used to make the archive, which it is pretty much guaranteed to be. Defining the appropriate variables on the command line doesn’t seem to override the cache conflicts. The workaround seems to be to create a script file to do this, but I really should be able to do all that from the command line.
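The script-file workaround ends up looking something like the following in the .spec file. A sketch only, with standard RPM macros; the real package obviously needs its own names and options:

```spec
# %build sketch: configure in a scratch subdirectory so CMake's cache
# doesn't collide with the path recorded when the archive was made.
%build
mkdir -p %{_target_platform}
cd %{_target_platform}
cmake -DCMAKE_INSTALL_PREFIX=%{_prefix} \
      -DCMAKE_BUILD_TYPE=Release ..
make %{?_smp_mflags}
```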

The more I play with CMake, the more I’m convinced that it could be a good system some day, but it’s not there yet. I’ll continue to use it for small projects that might require a Windows build, but there’s no way I’m ready to convert my larger packages. CMake is far from ready for that. They seem to have tackled some of the larger autoconf/automake issues, but have ignored so much basic stuff along the way.

CMake – an almost fulfilled promise

May 10, 2009

OK, so I said it was on the back burner. It was, for a few hours. I hate to be beaten though, so I went back at it.

As always, when changing systems, thinking has to change with it. There were a few things I had to modify, such as the install directive for the library: it turns out it also needs an ARCHIVE destination even when building shared libraries. The main one was a major “Duh!” moment in dealing with Windows shared libraries. Here’s an excellent site that explains it far better than I could: http://www.vtk.org/Wiki/BuildingWinDLL.
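For anyone hitting the same thing, the fix is just to list all three destinations in the install directive. A sketch, with `qpp` standing in for the real target name:

```cmake
# On Windows a shared library produces both a DLL (RUNTIME) and an
# import library (ARCHIVE); list all three destinations to be safe.
install(TARGETS qpp
        RUNTIME DESTINATION bin   # .dll on Windows
        LIBRARY DESTINATION lib   # .so on Linux
        ARCHIVE DESTINATION lib)  # import .lib / static archive
```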

So now everything builds fine and links fine. The next step is registering the DLL so I can use it!

I still think CMake is going to be a challenge for some of my existing projects, but I’m liking it more and more. The documentation needs work though (says the guy who hates to document).

CMake – an unfulfilled promise

May 10, 2009

As I progress with the development of a C++ queuing model library, the prospect of creating yet another build environment for compiling on Windows reared its ugly head. CMake promises to make these cross-platform builds doable, and as the build system for the library is still simple, I thought I’d give it a try.

The first step was on the Linux side, where everything works fine. It took a little doing, but not too much. The only real hiccup was figuring out how to set the library version numbers. All in all, with a bit of web trawling, it took me only about four hours to go from scratch to a system that matched my current automake/autoconf setup. Granted, it’s a simple configuration, but that’s a lot to replace. I was impressed.
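For reference, the version-number incantation I eventually found looks like this (the target name and numbers are illustrative):

```cmake
# VERSION sets the full library version (libqpp.so.1.0.0);
# SOVERSION sets the ABI soname link (libqpp.so.1).
set_target_properties(qpp PROPERTIES
    VERSION   1.0.0
    SOVERSION 1)
```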

Moving it to Windows (Windows XP with Visual C++ 2008 Express) was less successful. I need platform-dependent directives for my test program link options. The install target directive had to be removed in its entirety because of a complaint about a missing destination directive, even though it’s clearly there. I haven’t tested how that will work on Linux, but I’m sure it’s not good. There’s also the problem of the test files: I built a shared library, and the tests link against this. Except they don’t. They want a static library. So I try to build both. It doesn’t work. My square peg just doesn’t want to fit in this round hole.
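In principle, the platform-dependent part and the shared-plus-static build can be expressed along these lines. A sketch with made-up target and variable names; note the Windows wrinkle that a static qpp.lib would collide with the shared build’s import library of the same name:

```cmake
# Build both flavours from the same sources.
add_library(qpp SHARED ${QPP_SOURCES})
add_library(qpp_static STATIC ${QPP_SOURCES})
# On Linux the static archive can share the base name; on Windows
# that would clash with the DLL's import library, so skip it there.
if(NOT WIN32)
  set_target_properties(qpp_static PROPERTIES OUTPUT_NAME qpp)
endif()
# Platform-dependent link options for the test program.
if(WIN32)
  set_target_properties(qpp_test PROPERTIES
      LINK_FLAGS "/SUBSYSTEM:CONSOLE")
endif()
```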

Now, as I’ve already mentioned, I haven’t spent much time on this yet. I’ll continue, but for now, it’s on the back burner. Sigh.

QPP Development Continues…

April 16, 2009

Well, my initial queuing model library implementation met my immediate needs, but I need to make some changes to make it useful in the long term. The initial design was based on PerfDynamics PDQ library, but it has some serious limitations. The most important one is its limited support of multi-class and multi-server queues. So in the spirit of open source development, I’m stealing from other programs.

The Java Modelling Tools (JMT) project has a good set of routines implemented in Java. I’m in the process of porting these to C++ and shoe-horning them into my API. It should be possible to incorporate all the new functionality without changing the API.

And of course, the final results will be GPL.

This is a pretty exciting project on a number of levels. Aside from the obvious capabilities provided by the library, it’s a pretty interesting project both in terms of porting and in optimization. Remember that I’m tying these routines into a genetic algorithm, so they will be called thousands of times. Efficiencies are far more important to me than they were to the initial implementers.

The saga continues…