Sorry, KL-666, it was fairly late and I was tired. I understand now.
Thank you for the advice
HJ1an, OK, thanks, I see what you mean.
A320 safety record.
Re: A320 safety record.
The problem is broader than only extremely different flight behaviour in different laws.
Nowadays we see many late lift-offs, some even taking the ILS antennae at the far end of the runway with them. This is due to wrong inputs into the flight computer for a reduced-thrust take-off. Fine, mistakes like that happen. But why on earth do pilots not intervene when they start crossing the stripes at the other end of the runway at 80 kts? That has to do with training and standard ops. A person needs experience to recognize that something is wrong. From always trusting the automation you do not gain any experience; you need to have flown manually on a regular basis. But with an airline that even forbids you to turn off the automation in sim sessions, you will not get much experience, will you?
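The intervention being asked for above is essentially a gross-error cross-check: at some point down the runway, the speed should be reasonably close to rotation speed. Here is a minimal sketch of such a check; all numbers (the half-runway checkpoint, the 80% threshold) are invented for illustration, not taken from any real take-off performance monitoring system.

```python
# Hedged sketch (all thresholds invented): a crude take-off progress
# monitor that flags the "80 kts with the runway running out" case.

def takeoff_progress_ok(distance_used_m, runway_length_m, speed_kts, vr_kts):
    """Rule of thumb: once half the runway is behind you, speed
    should be well on the way to rotation speed (Vr)."""
    if distance_used_m < 0.5 * runway_length_m:
        return True                       # too early to judge
    return speed_kts >= 0.8 * vr_kts      # assumed intervention threshold

# 80 kts past the midpoint of a 3500 m runway with Vr = 150 kts:
print(takeoff_progress_ok(2000, 3500, 80, 150))   # False -> intervene
```

The point is not the particular numbers but that an experienced pilot runs a check like this instinctively, while a pilot trained only to trust the computed figures does not.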
The chain of knowledge and experience goes like this: pilots get their knowledge from their airline, and airlines get their knowledge from the manufacturers. If a manufacturer says, "In the near future we can program all the problems away; in the meantime, drill into your pilots that they must enter the numbers correctly," then many airlines will believe that and convey it to their pilots.
But I do not want pilots who nervously try to type everything in correctly. It is inevitable that some day some pilots will get it wrong. Actually, I do not mind that; mistakes are human. What I do mind is pilots lacking the awareness to intervene when necessary.
For that you need experience, which is what other manufacturers advocate. Those manufacturers say clearly: we are not going to change things, because it will never be watertight; your airline had better give your pilots more experience.
The philosophy of a manufacturer has great influence on how airlines perceive reality, and they in turn influence their pilots. I strongly believe that Airbus, being of manufacturer type 1 (the automation believer), is lulling airlines into a false sense of security.
Kind regards, Vincent
Re: A320 safety record.
There is another thing to consider. In mathematics it is a proven fact that there is no general way to prove the absence of failures in software (this is essentially the halting problem). Even worse, since you cannot do that, you cannot say beforehand whether a failure will show up as a glitch (a typo in a message, for example) or as something potentially catastrophic (like both engines shutting down on approach).
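The impossibility claim above is the classic undecidability result. A sketch of Turing's diagonalization argument, written as Python for concreteness (the `halts` function is hypothetical by construction; the whole point is that it cannot exist):

```python
# Sketch of the diagonalization argument: assume a perfect analyser
# exists, then build a program that defeats it.

def halts(program, data):
    """Hypothetical perfect analyser: returns True iff program(data)
    halts. No such total function can exist -- that is the point."""
    raise NotImplementedError("provably impossible in general")

def paradox(program):
    # If the analyser says program(program) halts, loop forever;
    # otherwise halt immediately.
    if halts(program, program):
        while True:
            pass
    return "halted"

# paradox(paradox) would halt if and only if it does not halt -- a
# contradiction, so 'halts' cannot exist. Fully verifying general
# software, avionics included, is therefore undecidable.
```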
There is a second problem: the probability of failure is dispersed over situations. Software deals well with standard situations, because the developers have all the data; it is standard, it happens all the time. Problems occur when something happens in a dynamic real-life situation. Here is an example from a totally different real-life domain: I once worked with software for MRI and CT scanners. Those machines have motorized tables that drive the patient through the tube. Now, everybody of course had thought far enough ahead to provide sensors and stop the motion if a patient is too heavy. But if a patient was under the weight limit by just half a pound, the table moved, and the diameter of his belly could still be big enough to touch the rotating tube. So they thought about normal people, and they even thought about oversized people. What they missed were oversized people who weighed less than expected.
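The gap described above can be boiled down to a few lines: an interlock that checks one quantity (weight) while the hazard actually depends on another (girth). All limits here are invented numbers, not real scanner specifications.

```python
# Illustrative sketch (hypothetical limits): a table-motion interlock
# that checks patient weight but not girth -- the gap described above.

MAX_WEIGHT_KG = 150   # assumed limit used by the original interlock
MAX_GIRTH_CM = 140    # the dimension the original check ignored

def motion_allowed_v1(weight_kg):
    # Original check: weight only.
    return weight_kg <= MAX_WEIGHT_KG

def motion_allowed_v2(weight_kg, girth_cm):
    # Fixed check: every quantity that can cause contact must be tested.
    return weight_kg <= MAX_WEIGHT_KG and girth_cm <= MAX_GIRTH_CM

# A patient just under the weight limit but too wide slips through v1:
print(motion_allowed_v1(149.8))        # True  -> table moves, unsafe
print(motion_allowed_v2(149.8, 150))   # False -> contact risk caught
```

The designers had not forgotten safety; they had modeled the input space incompletely, which is exactly the failure mode that matters for aircraft software too.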
So, transferring this experience to aircraft: the software in the plane will do fine in most standard situations, but planes move all around the world and will here and there encounter situations that were not foreseen when the software was designed. What happens then is a pure game of luck.
The third problem has to do with perception. Computers don't "see" things, and even less do they "perceive" things. What they do is measure things and analyze them in a lot of loops and ifs, basically comparing the input against data they hold, whether in the form of lists or databases; it doesn't matter. Autolanding with ILS is a fine thing, except that if the frequencies get reorganized, for example, the ILS frequency of a runway no longer matches the one in the database until the database is updated. The computer can't recognize that something is fishy. It can't press a talk button and ask ATC. For the computer, this runway has no active ILS, and the software needs additional human input. Actually, what the computer needs now is an arm with a fat hammer on it to whack a pilot of the Sumthing-Wrung style over the head until he tells it the new frequency or lands that damn bird manually. But that would be deeply inhumane, so the computer is limited to messages on screens; and if those untrained clowns don't read the messages or don't understand them, what will happen?
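The frequency-mismatch scenario amounts to a stale-database lookup: the computer can only compare the tuned value against what it has stored. A minimal sketch, with an invented runway ident and invented frequencies (not real published ILS data):

```python
# Hedged sketch: how a navigation-database lookup goes stale.
# Runway ident and frequencies are invented for illustration.

nav_database = {"EDDF/25L": 109.50}   # frequency as of the last data cycle

def tuned_ils_matches(runway, tuned_mhz):
    # The computer can only compare against what it has stored;
    # it cannot key a mic and ask ATC whether the frequency changed.
    stored = nav_database.get(runway)
    return stored is not None and abs(stored - tuned_mhz) < 0.01

# Frequency re-allocated in the real world, database not yet updated:
print(tuned_ils_matches("EDDF/25L", 110.30))   # False -> "no active ILS"
```

From inside the software, a reorganized frequency and a genuinely unserviceable ILS are indistinguishable; only a human with context can tell them apart.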
Free speech can never be achieved by dictatorial measures!