DESIGN OF AN INTELLIGENT STRUCTURAL QUALIFICATION ENVIRONMENT USING MSC/PATRAN
N.J. Dullaway, A.J. Morris
Structures & Materials Group, College of Aeronautics, Cranfield University, England.
ABSTRACT
Modern structures are becoming ever more complex, large and expensive, particularly when full-scale or similar qualification tests are required. This is especially true when the structure being designed is safety-critical. In addition, questions are now being asked about the ability of conventional test practices to adequately qualify and validate new structures. This situation is causing concern in a number of industrial domains, including aerospace, maritime and civil engineering.
The arrival of computer-based analysis, particularly finite element analysis, has provided the ability to reduce reliance on conventional, or "real", testing and instead go down the path of "virtual" testing. However, virtual testing raises the question of the reliability of analysis and the possibility that the use of poor procedures in the analysis process may produce results that at best are meaningless and at worst are extremely dangerous.
This paper describes SAFESA™ (SAFE Structural Analysis), a research project to develop a computer-aided engineering environment for automated structural qualification in a range of domains by means of virtual testing. This application is built on the MSC/PATRAN & MSC/NASTRAN platform and implemented using PCL as the development language.
INTRODUCTION
In the context of this paper, virtual testing is defined as qualification by means of analysing a mathematical model of the subject structure; physical testing is the more traditional method of qualification, whereby an actual example of the item is subjected to various environmental conditions and its behaviour measured. Common examples of contemporary tools available for virtual testing are Finite Element Analysis (FEA), Computational Fluid Dynamics (CFD) and Computational Electromagnetics (CEM). One of the key advantages of virtual testing is that it enables the design to be evaluated and validated before manufacture, thus promoting the use of concurrent engineering methods as well as reducing design-to-development costs.
Fig. 1: Test Paradigms
A common worry of those who encounter virtual testing for the first time is ‘accuracy’. There often exists a belief that a laboratory test programme (including, in the aerospace industry, flight-testing) of a physically existent prototype or pre-production sample is inherently more reliable than analysis conducted on a virtual model that exists only within a computer memory. This belief is not always justifiable. Fig. 1 compares the two paradigms of test and analysis. The In-Service Structure is the entity that leaves the factory as a product for use by a customer. The ‘Real World’ is the representation (usually a prototype) of the product that is to be tested. The Model is the computer-based representation of the in-service structure. A series of tasks is carried out on each representation and responses are generated.
A series of laboratory tests cannot be expected to cover all forms of behaviour that will be required of the in-service structure by the customer; it can only ensure that safety regulations are satisfied. In the case of physical testing it is often difficult to replicate the support conditions experienced by the real-world structure, and obtaining a realistic and comprehensive set of test loads always poses serious problems. Consequently there are a number of (almost always non-trivial) differences between the responses of the in-service structure and its representation.
While contemporary FEA software can generate responses to 32-bit numerical precision, there is almost certain to be some level of idealisation between the in-service structure and the model. Consequently there are, again, a number of differences between the responses of the in-service structure and its representation.
The conclusion is that a ‘Real World’ test sample is just as much a model as a computer-based virtual model. Neither can produce fully accurate responses, but both are capable of producing responses that are within the bounds of allowable error. These, and other factors, are often overlooked and the very fact that a physical structure is being tested is taken as a guarantee that the results obtained do model the real-world environment of the designed structure. In addition, if one then considers that a particular example of an object will fail in a different way from others within a given production run because of microscopic inconsistencies in its structure, such as cracks, then it is very difficult to say whether that one example of a production run is able to represent the batch ‘better’ than a model created from the design. Therefore, virtual testing should not be dismissed as a testing strategy on the grounds of accuracy alone.
In any form of testing the manner by which the data was obtained (the method) is often more vital to a reliable test than the data itself. Once it is accepted that the concept of virtual testing is not inherently less reliable than physical testing, the problem of ensuring that the method itself is reliable becomes paramount.
SAFESA
INTRODUCTION
In recent years there has been a trend towards ever-larger and more complex structures and engineering applications - so large, in fact, that physical testing techniques are no longer adequate. At the same time, computer power has increased to the extent that these same large structures can be modelled without difficulty. Add to this the trend for such large structures increasingly to lie in the safety-critical domain, and a need emerges for a formalised and reliable virtual testing procedure.
Fig. 2: SAFESA Paradigm
SAFESA is one of a number of projects sponsored by the UK Government’s Department of Trade & Industry (DTI) as part of its Safety-Critical Systems Initiative. The aim of the project is to enable structural qualification to be carried out reliably and accurately using the FEA method in safety-critical situations by means of a ‘Best Practice’ [1]. The philosophy of SAFESA is one of error management: errors in the virtual testing process are identified, classified and treated. Fig. 2 shows the SAFESA paradigm in simple terms. Firstly, the in-service structure is defined in terms of the loading environment, the response environment, the certification or qualification requirements, etc. Secondly, an idealised model is generated from the in-service structure, from which a finite element model can be built, whilst acknowledging that this idealisation is a possible source of error. Thirdly, the finite element model is used to produce a set of responses that can be used to qualify the in-service structure; further errors are generated at this stage. The SAFESA process is used to analyse the errors at all stages of virtual testing such that the in-service structure can be qualified with confidence.
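The paradigm can be pictured as a staged pipeline in which every recognised error source is logged against the stage that produced it, forming the audit trail discussed below. The following sketch is purely illustrative: the SAFESA application itself is written in PCL on MSC/PATRAN, Python is used here only to make the paradigm concrete, and every name in the sketch is hypothetical.

    # Illustrative Python sketch of the SAFESA paradigm (Fig. 2); all names
    # are hypothetical and do not reflect the actual PCL implementation.
    from dataclasses import dataclass, field
    from enum import Enum

    class Stage(Enum):
        DEFINITION = "in-service structure definition"     # loads, responses, requirements
        IDEALISATION = "idealised model"                   # geometry and behaviour assumptions
        FE_SOLUTION = "finite element model and solution"  # mesh, formulation, solver

    @dataclass
    class ErrorSource:
        stage: Stage       # the paradigm stage at which the error arises
        description: str   # e.g. "support stiffness at root joint unknown"
        estimate: float    # current quantified error estimate

    @dataclass
    class QualificationRecord:
        """Audit trail: every recognised error source, logged stage by stage."""
        errors: list = field(default_factory=list)

        def log(self, stage: Stage, description: str, estimate: float) -> None:
            self.errors.append(ErrorSource(stage, description, estimate))

        def qualified(self, threshold: float) -> bool:
            # The structure is qualified with confidence only when every
            # logged error estimate lies within the allowable bound.
            return all(e.estimate <= threshold for e in self.errors)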
SAFESA was developed with FEA in mind although the method is portable to other testing procedures. The drivers for the project included:
1. The trend towards replacing physical testing with virtual testing.
2. The reduction of costs via reduced design cycle time and the promotion of concurrency.
3. The ability to provide full transparent auditing.
4. The improved legal position provided by the audit trail.
ERROR TREATMENT
FEA necessarily makes certain generalisations and assumptions about the real world when constructing a model, and each such assumption has the effect of introducing errors into the analysis process. This does not invalidate the results obtained from a finite element analysis, provided a proper error control system is used. The general procedure for error control is as follows (a minimal sketch follows the list):
1. Identification and classification of the error.
2. Quantification of the error.
3. Treatment.
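As an illustration of the three-step procedure, the following hedged Python sketch drives one identified error source through quantification and treatment. It is an assumption about how the cycle might be coded, not the published SAFESA implementation, and all names are hypothetical.

    # Illustrative sketch of the error control cycle; names are hypothetical.
    from typing import Callable

    def control_error(source: str,
                      quantify: Callable[[str], float],
                      treat: Callable[[str], str],
                      threshold: float,
                      max_passes: int = 10) -> float:
        """Drive one identified and classified error source (step 1)
        through quantification (step 2) and treatment (step 3)."""
        estimate = quantify(source)        # step 2: quantify the error
        for _ in range(max_passes):
            if estimate <= threshold:
                return estimate            # error adequately treated
            source = treat(source)        # step 3: treat, e.g. refine the idealisation
            estimate = quantify(source)   # re-quantify after treatment
        raise RuntimeError("error estimate was not reduced below the threshold")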
Much work has been done on the process of error classification [2, 3], and a four-level taxonomic system has arisen. The four classes of error are:
1. Modelling, or Idealisation, errors, caused by a lack of knowledge of the real structure and its environment.
2. Procedural errors, due to discretisation (meshing) and post-processing.
3. Formulation errors, created during the conversion of a model to an actual finite element problem ready for solution.
4. Solution errors, produced during the solution of the Finite Element problem.
These classes can be further broken down; each constitutes an error source, for which various error treatment techniques are available. The goal of error treatment is to progressively reduce each error estimate to below a predefined threshold value as the idealisation is refined. A sketch of the classification follows.
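The four-level classification can be captured as a simple enumeration. This is again an illustrative Python sketch, and the sub-class examples in the comments are assumptions for illustration rather than SAFESA's full breakdown.

    # Illustrative sketch of the four-level error taxonomy [2, 3].
    from enum import Enum

    class ErrorClass(Enum):
        """Top-level SAFESA error classes; each breaks down further into
        individual error sources (comment examples are illustrative only)."""
        MODELLING = 1    # idealisation: e.g. unknown joint stiffness, simplified geometry
        PROCEDURAL = 2   # e.g. mesh density choices, post-processing of stresses
        FORMULATION = 3  # e.g. element type selection, constraint equations
        SOLUTION = 4     # e.g. round-off, ill-conditioning of the stiffness matrix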
The error treatment techniques are:
1. Rules based on experience,
2. Scoping calculations,
3. Comparison with existing test results,
4. Hierarchical modelling (model improvement),
5. Sensitivity analyses.
The current development phase - the construction of a SAFESA-based expert system - aims to automate both the identification of errors and the selection of possible treatment strategies.
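A minimal sketch of what such automation might look like is given below: a rule base maps each error class to candidate treatments drawn from the list above. The particular pairings shown are invented purely for illustration and are not SAFESA's published rules; as before, Python stands in for the PCL implementation.

    # Hypothetical rule base mapping each error class to candidate
    # treatments; the pairings are invented for illustration only.
    TREATMENTS = {
        "modelling":   ["rules based on experience", "scoping calculations",
                        "comparison with existing test results"],
        "procedural":  ["hierarchical modelling (model improvement)",
                        "sensitivity analyses"],
        "formulation": ["rules based on experience",
                        "hierarchical modelling (model improvement)"],
        "solution":    ["scoping calculations", "sensitivity analyses"],
    }

    def advise(error_class: str) -> list:
        """Suggest candidate treatment strategies for an identified error class."""
        return TREATMENTS.get(error_class.lower(), ["rules based on experience"])

    # Example: treatments suggested for an idealisation (modelling) error.
    print(advise("modelling"))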
Full details of the SAFESA process, a detailed breakdown of each stage, example problems and a more comprehensive discussion of the philosophy have been published in three reference works: the “SAFESA Technical Manual” [4], the “SAFESA Quick Reference Guide” [5] and the “SAFESA Management Guidelines” [6].
SAFESA EXPERT ADVISORY SYSTEM
The initial phase of SAFESA relied on the engineer performing the analysis, following the SAFESA methodology, to identify sources of error and to flag them for later treatment. Now that SAFESA has been defined and published as a ‘Best Practice’, the aim of the project is to implement SAFESA as a computer-based Expert Advisory System (EAS) that will advise the users of FEA software on the correct approach to take so that the final analysis is valid, with well-defined error bounds. Such an analysis might be accepted as a good representation of the behaviour of the in-service structure.