
Posts Tagged ‘testing’

The first stage worked flawlessly, but the staging malfunctioned. I wasn't watching: I went to bed at 4 am local time after sitting through some launch delays, thinking it would take forever anyway, and of course the launch happened right after.

What has always been a mystery to me: if SpaceX is selling Falcon 1 flights at about eight million dollars each, why don't they fly a lot more tests with dummy payloads? It seems that would accelerate development a lot. Keeping hundreds of workers on the payroll can't be cheap, so a delay of a few months in selling rocket flights will surely be costly.
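A rough back-of-envelope sketch makes the tradeoff concrete. The headcount and salary figures below are assumptions of mine for illustration only, not SpaceX's actual numbers:

# Back-of-envelope: payroll burn during launch delays vs. the price of
# an extra Falcon 1 test flight. Headcount and salary are assumptions
# for illustration, not SpaceX figures.

employees = 500                      # assumed headcount
annual_cost_per_employee = 100_000   # assumed fully loaded cost, USD/year

monthly_burn = employees * annual_cost_per_employee / 12
falcon_1_price = 8_000_000           # quoted Falcon 1 launch price, USD

print(f"Monthly payroll burn: ${monthly_burn:,.0f}")
print(f"One test flight = {falcon_1_price / monthly_burn:.1f} months of payroll")

With these made-up numbers, an extra test flight costs about as much as two months of schedule slip, which is the intuition behind the question above.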

During the webcast run-up to the launch, they showed Elon Musk giving a tour of the factory, and everything in their production processes seemed well automated and thought out, since they have to build so many engines for Falcon 9 anyway. To mention just two examples: the fully automated copper milling of the chamber and nozzle root liner, and the automatic pipe-bending machine for the nozzle. Some 80% of Falcon 1 hardware is produced in house from bare metal.

On the other hand, the Merlin 1 main engine has gone through many changes, with power increases (I don't remember whether the turbomachinery or the injectors changed) and at least the move to a regeneratively cooled nozzle. That means early flight testing could not have been very representative of the final design, or of the build and integration processes. Now they seem further along, with the Falcon 9 first stage recently having done a full-up hold-down firing of its nine engines.

Design vs Test

There's something fundamental about the whole issue of designing vs testing. It's not a totally simple picture: current advanced computation and simulation capabilities make the boundary fuzzy, and there has always been non-destructive testing of partial hardware too, like structural test models. So, can expensive destructive test flights be seen as just an extension of finding a workable design, as well as workable production, integration and operation processes? In that sense they can all be pooled together, as different means of reaching some combination of capability, cost and time goals.

Even if there are no rigid mental boundaries between development and testing, one still has to judge more carefully before running a very expensive destructive flight test than before running a few-minute configuration simulation. Of course you have to be careful with time allocation in design too; conceptual design is one tool for that, helping avoid huge amounts of time and thus money spent on elaborate dead-end designs and configurations. The previous post was about that: NASA spent a lot of design time on launchers and components that ended up too small anyway. But they did actually start from very broad and minimal assumptions in ESAS, eliminating many fundamental concepts in a tree analysis, pictured below.

ESAS conceptual launch vehicle design

Conceptual design works from the top down, but real hardware testing works from the bottom up. Both are needed to arrive at a real capability. Armadillo Aerospace's John Carmack has mentioned innumerable times how building working hardware always weeds out overambitious and overcomplicated designs (I must shamefully admit I have very little experience in designing hardware that actually gets built, though I am in the process of changing that). If you only do conceptual design, without grounding anything in real hardware, you are moving on very thin ice. The NASP program was a good example of that.

Armadillo Aerospace's Module in tethered hover testing

In a sense, Falcon 1 is the hardware and process test platform for the real rocket, Falcon 9. Now that SpaceX seem to have their production line ready, I hope they will fly Falcon 1 test launches in rapid succession to iron out the bugs (after first doing a lot of non-destructive ground testing, like for the pyrobolts this time), and hopefully without charging the payload customers.

Again, looking back to Wernher von Braun: after the V-2 he proposed, in the USA, the development of several new rockets, and the plans included enormous numbers of test launches. Hardware got more reliable, and rockets got bigger, more complicated and thus more expensive, which eliminated this approach, but it is an interesting historical mindset and viewpoint.

And of course, the fundamental property of expendable rockets, that every flight ends in destruction, prevents economical incremental testing. You can't do careful envelope expansion or survive flight anomalies. Reusable flight vehicles, on the other hand, allow a lot of flight testing.


Well, as has been noted in newspace circles, Armadillo Aerospace failed to win the Lunar Lander Challenge, even after coming very close in 2006 and closer still this time. Their report text is here, and videos and pictures are here (highly recommended). The other teams failed even to participate.

Everybody was cheering for Armadillo. The dream carries on. More testing, more steady-state solutions. One part of Armadillo's problem is that they have to drive quite a distance to their test site, which limits testing a lot. Teams like Masten Space Systems have it easier in Mojave, but they had tank supplier problems as well as some issues with their control algorithms, and haven't posted any updates for over a month since they started attempting hover tests. (Wink, wink. 🙂 )

Acuity and Paragon didn't give out much info, so it was hard to judge where they were; the same applied to BonNova. Speedup gave some info, as did Micro-Space. Unreasonable Rocket's Paul Breed was very informative on his blog, and I laud his efforts.

One contest to which some parallels can be drawn is the DARPA Grand Challenge, which promised prizes to teams building an autonomous vehicle that could drive from Los Angeles to Las Vegas along a marked desert route. The first year was a failure, with most of the teams failing even to qualify for the start (there was a short obstacle-course test). But the next year the prize was won and many vehicles finished. It seems that either the Lunar Lander Challenge is harder, or people are not willing to put similarly broad resources behind it. It may be both. In 2007 the DARPA Grand Challenge moved to a new urban setting.

There is also a different comparison: the X-Prize, won by Scaled Composites in 2004. The other teams never came close, or made much visible progress, and almost all disappeared quickly after the victory. Scaled had a big-money backer, Paul Allen, and worked long and hard to win. A very different picture from the Grand Challenge, and a different style of contest: one ran yearly competitions with increasing prize money, the other set an absolute deadline. And of course the former task was probably much easier than the latter.

There are many ways to analyze Armadillo's failures. You can look at subsystems: this year the problem was the engine (you can read the details in their report mentioned above); last year it was the landing gear, as well as a badly surveyed track.

But the engine problems had a story behind them: the air was more humid, or the altitude was higher, than at their own test site. Or the ethanol composition was different, which clogged the injector and caused grief later. Or they had never before run flights back to back so close together with this engine. All of this speaks to how much the small details matter. Jon Goff (who inhabits my blogroll) has said that Masten Space had their first engine hard start only on the 300th test run. You need lots of testing to make designs reliable (or else somebody has to invent a foolproof way to do a start). Some people have also proposed that, since developing new reliable rocket hardware takes significant time and effort, it would be easier to buy ready-made engines from subcontractors. But this has not caught on: the Lunar Lander Challenge outfits are so poor that they can't afford it and have to develop their own hardware. Also, you lose intellectual capital and your technological lead by selling your engine design. And lastly, people saw what happened to Masten when they subcontracted their control algorithm (or so I assume, at least until they post a development update): it's a bit harder to troubleshoot something you didn't build yourself. Although the decision still might have sped up development considerably.
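To put a rough number on "lots of testing", here is a standard statistical rule of thumb (the "rule of three"; my own illustration, not anything Armadillo or Masten published): after n failure-free runs, the per-run failure probability is still only bounded to about 3/n at 95% confidence.

# "Rule of three": after n consecutive successful tests with no failures,
# an approximate 95% upper confidence bound on the per-run failure
# probability is 3/n. Illustration only; the run counts are arbitrary.

def failure_bound(n_successes: int) -> float:
    """Approximate 95% upper bound on per-run failure probability
    after n_successes failure-free runs."""
    return 3.0 / n_successes

for n in (30, 100, 300, 1000):
    print(f"{n:>5} clean runs -> failure rate likely below {failure_bound(n):.1%}")

# Output:
#    30 clean runs -> failure rate likely below 10.0%
#   100 clean runs -> failure rate likely below 3.0%
#   300 clean runs -> failure rate likely below 1.0%
#  1000 clean runs -> failure rate likely below 0.3%

So even 300 clean runs, by themselves, only support a claim of roughly one-in-a-hundred reliability, which is why a hard start on the 300th run should not be a surprise.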

It's an interesting juncture. If there is no more than one LLC competitor next year, or if there are several but the prize still isn't won, would a rethought approach to the problem be better? On the other hand, with a tiny bit more luck Armadillo could have taken the money home already in 2006, and people would have drawn other far-reaching conclusions about these things. It's not wise to make up your mind from too little data and proclaim something far-reaching. So I'll make a prediction, like last year: next year Armadillo will finish the challenge, and there will be other competitors too.

Of course, if someone wanted to enter the suborbital business, they could hire me as a consultant and I'd tell them which parts to buy from where to combine into a perfect vehicle. 😉

Edit 2007/11/5: Corrected Masten's 30 test runs before the hard start to 300, per Jon Goff's comments. 🙂 He also says my guesses about their control system are not entirely right.
