Complex Systems Theory and Applications

In addition to the other research programs, the ACCS included a number of projects addressing key problems for complex systems. These projects applied theory to problems in the design and operation of complex socio-technological systems, and developed new analysis techniques for complex systems. Some of them were rolled into the ACCS when Centre operation commenced in 2004.



An empirically-derived experimentally-validated framework for interactions in information environments

Project Leader: Margot Brereton
Researchers: Brett Campbell, Jared Donovan, Tim Cederman-Haysom

Complex systems require new and innovative modes of interaction so that computer technologies better support human work practice. This project focussed on the participatory design of multi-modal interfaces for the dental surgery. In dental surgeries, computer technology is increasingly used to store, display and update patient records, digital X-rays, preventative care information, multimedia simulations and patient billing. The dental surgery is in many respects a typical work environment: complex, and replete with physical tools, people and information-recording devices. However, the increased embedding of traditional computer use into dental practice is problematic. The conventional mouse-and-keyboard interface interferes with the way dentists do their work, posing problems both for infection control and because of the detailed attention that must be devoted to driving the interface. This suggests an approach based on newer interface technologies such as gesture and speech recognition and other ubiquitous computing technologies. Although such technologies appear technically feasible, the most pressing problem for researchers is making them work in real-world contexts. With a view to designing better interfaces for dental surgeries, we conducted a number of participatory design studies and introduced a number of technology probes into the dental surgery. The probes consisted of common ubiquitous computing technologies such as digital pens, accelerometer-based gestural devices, speech recognition and RFID tagging. They were configured into technologies for use by dentists, with a specific view to making them as configurable as possible so that dentists could (i) understand how they work, (ii) understand the potential of the technologies, and (iii) experiment with the technologies and engage in a dialogue about how to support their practice in quick surgery studies.

During 2004 Margot Brereton presented this work in a panel at CHI in Vienna, the premier international human-computer interaction conference. The group also presented several papers at the Participatory Design Conference in Toronto, the leading international forum on participatory design. We conducted ethnographic studies and participatory design explorations with technology probes in dental surgeries. RFID tags, digital pens, and speech and gestural devices were configured for use by dental practitioners and tested in participatory discussions. We will bring our evaluations and theoretical framework to a conclusion in 2005.

In 2005, we completed a trial of our gestural and speech ubiquitous computing technologies with three New Zealand dentists who are lead users working with EXACT, a collaborating dental software company. We will present this work at the Aarhus Critical Computing Conference in August 2006. The work is now being analysed and written up as we develop our framework for designing interactions with ubiquitous computing.

  • Campbell, B., Brereton, M., "Designing to maintain human agency in context-aware systems", Proceedings of the Participatory Design Conference, 2004, 97-100.
  • Campbell, B., Brereton, M., "Maintaining human agency in the design of context-aware systems: Design games in a dental surgery", OzCHI 2004, 2004.
  • Cederman-Haysom, T., Brereton, M., "Bridging technical and HCI research: Creating usable ubiquitous  computing", OzCHI 2004, 2004.
  • Cederman-Haysom, T., Brereton, M., "Designing usable ubiquitous computing", Proceedings of the Participatory Design Conference, 2004, 101-104.
  • Donovan, J., Brereton, M., "Meaning in movement", Proceedings of the Participatory Design Conference, 2004, 163-166.

Computational group theory

Project Leader: George Havas
Researchers: Gilbert Baumslag, Colin Campbell, Marston Conder, Chuck Miller, Mike Newman, Eamonn O’Brien, Colin Ramsay, Edmund Robertson, Doug Troeger, Michael Vaughan-Lee

Group theory is a fundamental part of pure mathematics with diverse applications, and computational group theory addresses many of its problems. In this project, we studied computationally-based proofs in groups given by presentations. As an integral part of the research, we aimed to design, implement, test, analyse and apply new algorithms for groups. We also aimed to develop metrics for evaluating the quality of proofs, with a view to addressing Hilbert's "24th" problem: finding criteria for determining the simplest proofs.
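
The flavour of computing with groups given by presentations can be illustrated with a minimal sketch (this is our illustration, not the project's software): checking that a map from the generators of a finite presentation to concrete permutations respects the relators.

```python
# Minimal sketch: verify that candidate permutations satisfy the relators of
# a finite presentation. Example presentation of the symmetric group S3:
# < a, b | a^2, b^3, (ab)^2 >. Permutations act on {0, 1, 2} and are
# composed left-to-right.

def compose(p, q):
    """Return the permutation 'apply p, then q'."""
    return tuple(q[i] for i in p)

def power(p, n):
    result = tuple(range(len(p)))  # identity permutation
    for _ in range(n):
        result = compose(result, p)
    return result

IDENTITY = (0, 1, 2)
a = (1, 0, 2)  # the transposition (0 1)
b = (1, 2, 0)  # the 3-cycle (0 1 2)

# Each relator must evaluate to the identity permutation.
relators = [power(a, 2), power(b, 3), power(compose(a, b), 2)]
print(all(r == IDENTITY for r in relators))  # True
```

Real systems (e.g. coset enumeration as in the Campbell-Havas-Robertson reference below) work with the presentation alone, without a known permutation representation; this sketch only shows the relator-checking idea.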

During the year 2004, Havas presented six invited lectures at overseas conferences and universities. Jointly with Professor M.R. Vaughan-Lee of Oxford University, Havas resolved a long-standing problem about Engel groups, by proving that 4-Engel groups are locally nilpotent. A paper on this topic has been accepted to appear in the International Journal of Algebra and Computation. Three other papers on computational group theory appeared in 2004 in refereed journals.

During 2005, Havas presented four invited lectures at overseas conferences and universities. Jointly with Professor Charles Leedham-Green of the University of London, Professor Eamonn O'Brien of the University of Auckland and Professor Michael Slattery of Marquette University, Havas resolved a challenging problem about groups associated with geometries. A paper on this topic has been accepted to appear in the international journal Advances in Geometry. One paper on computational group theory appeared in 2005 in a refereed journal and six papers were accepted to appear in refereed journals or conference proceedings.

During 2006, Havas presented nine invited lectures at overseas conferences and universities. In July, he was a London Mathematical Society-supported visitor to four Universities in the UK. Two papers on computational group theory appeared in refereed journals in 2006 and two papers appeared in refereed conference proceedings. These papers solved questions about efficient presentations for groups, distinguished groups associated with geometries, and resolved a long-standing conjecture about an infinite family of groups related to trivalent graphs.

During 2007, work focused on the solution of problems related to the efficiency of infinite simple groups, on problems associated with an infinite family of groups related to trivalent graphs, and on the Andrews-Curtis conjecture. Progress was made on converting proofs obtained by machine computation into more conventional human proofs.

During 2008, work focused on: the solution of problems related to the efficiency of finite simple groups; aspects of the Hughes conjecture about the index of certain subgroups; and the Andrews-Curtis conjecture. New efficient presentations were found for large simple groups not previously known to be efficient. Relatively small counterexamples to the Hughes conjecture were found and explicitly constructed.

  • Baumslag, G., Cleary, S., Havas, G., "Experimenting with infinite groups", Experimental Mathematics, Vol. 13, 2004, 495-502.
  • Campbell, C.M., Havas, G., Robertson, E.F., "Addendum to an elementary introduction to coset table methods in computational group theory", Groups St Andrews 1981 (revised edition), 2007; London Mathematical Society Lecture Note Series, Vol. 71, 361-364.
  • Campbell, C.M., Havas, G., Ramsay, C., Robertson, E.F., "On the efficiency of the simple groups with order less than a million", Experimental Mathematics, Vol. 16, 2007, 347-358.
  • Campbell, C.M., Havas, G., Ramsay, C., Robertson, E.F., "Nice efficient presentations for all small simple groups and their covers", LMS Journal of Computation and Mathematics, Vol. 7, 2004, 266-283.
  • Conder, M., Havas, G., Ramsay, C., "Efficient presentations for the Mathieu simple group M22 and its cover", Proceedings of the Conference Finite Geometries, Groups, and Computation, 2006, 33-42.
  • Havas, G., Lawrence, J.L., Ramsay, C., Street, A., Yazici, E., "Defining set spectra for designs can have arbitrarily large gaps", Utilitas Mathematica, Vol. 75, March 2008, 65-79.
  • Havas, G., Leedham-Green, C., O'Brien, E., Slattery, M., "Certain Roman and flock generalized quadrangles have nonisomorphic elation groups", Advances in Geometry, Vol. 6, No. 3, 2006, 389-395.
  • Havas, G., Leedham-Green, C., O'Brien, E., Slattery, M., "Computing with elation groups", Proceedings of the Conference Finite Geometries, Groups, and Computation, September 4-9, 2004, Pingree Park, Colorado, USA, 2006, 95-102.
  • Havas, G., Ramsay, C., "On proofs in finitely presented groups", Groups St Andrews 2005, 2007; London Mathematical Society Lecture Note Series, Vol. 340, 457-474.
  • Havas, G., Robertson, E.F., Sutherland, D., "Behind and beyond a theorem on groups related to trivalent graphs", Journal of the Australian Mathematical Society, Vol. 85, No. 3, December 2008, 323-332.
  • Havas, G., Robertson, E.F., Sutherland, D., "The Fa,b,c conjecture is true, II", Journal of Algebra, Vol. 300, 2006, 57-72.
  • Havas, G., Vaughan-Lee, M.R., "4-Engel groups are locally nilpotent", International Journal of Algebra and Computation, Vol. 15, No. 4, 2005, 649-682.
  • Havas, G., Vaughan-Lee, M.R., "Computing with 4-Engel groups", Groups St Andrews 2005, 2007; London Mathematical Society Lecture Note Series, Vol. 340, 475-485.
  • Havas, G., Vaughan-Lee, M.R., "On counterexamples to the Hughes conjecture", Journal of Algebra, 2009.

Emerging applications of advanced computational methods and discrete mathematics

Project Leader: Peter Adams
Researchers: Darryn Bryant, Melinda Buchanan, Barbara Maenhaut

Combinatorial computing plays an important role in visualising, modelling and solving a variety of important practical problems, particularly in the field of complex systems. In this project, we investigated approaches such as grid computing, evolutionary algorithms and refined theoretical approaches to enhance combinatorial searches. These are being used in a variety of problems, ranging from searches for combinatorial designs to applications in bioinformatics. Future developments in combinatorial computing techniques may have broader applications including understanding network-based systems in biology, engineering and economics.
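
A small, self-contained example of the kind of combinatorial object involved (our illustration, not the project's code) is a Steiner triple system on 7 points, i.e. a decomposition of the complete graph K7 into triangles, built by the classical difference method.

```python
# Develop the base block {0, 1, 3} modulo 7 to obtain a triple system on
# 7 points, then verify it decomposes K7: every edge lies in exactly one
# triangle. This is a textbook construction, shown only to illustrate the
# project's subject matter.
from itertools import combinations

base = (0, 1, 3)
triples = [tuple(sorted((x + s) % 7 for x in base)) for s in range(7)]

# Count how often each of the 21 edges of K7 is covered.
edge_count = {}
for t in triples:
    for e in combinations(t, 2):
        edge_count[e] = edge_count.get(e, 0) + 1

print(len(triples), all(c == 1 for c in edge_count.values()))  # 7 True
```

The construction works because the differences ±1, ±2, ±3 of the base block cover every nonzero residue modulo 7 exactly once; computational searches of the kind reported here tackle cases where no such algebraic shortcut is known.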

There were nine journal articles published on the work of this project during 2004. These results predominantly represent developments in knowledge and understanding of combinatorial structures. In addition, we have published work on combinatorial representations and evolutionary algorithms used in computational drug discovery.

The research has led to the development of computational tools which the participants have successfully applied in various areas, including open problems in graph decomposition and various bioinformatics projects. There were four journal articles published during 2005, predominantly representing developments in knowledge and understanding of combinatorial structures. In addition, Adams gave an invited presentation at a bioinformatics symposium in Malaysia.

In 2006, research continued into the development of effective computational tools that can be applied to graph decomposition problems and to various bioinformatics projects. Two journal articles were accepted for publication during 2006, representing advances in the knowledge of small-order combinatorial graphs. A book chapter also appeared, describing innovations in DNA sequencing technologies.

In 2007, we focused on development and application of computational tools in graph decomposition problems. There were two journal articles accepted for publication during 2007, representing advances in the knowledge of several classes of graph decompositions. One paper presents a comprehensive survey of existence results for small-order G-designs along with some new results, and the other considers a class of equitably 2-colourable small-order cycle decompositions.

This project concluded in 2008, with the final work being a comprehensive survey of known existence results of G-designs for a range of graphs G, including some new results for missing small-order graphs.

  • Adams, P., "Sequencing aided by mutagenesis facilitates the De Novo sequencing of megabase DNA fragments by short read lengths", New High Throughput Technologies for DNA Sequencing and Genomics, Elsevier, 2007.
  • Adams, P., Bryant, D., Maenhaut, B., "Common multiples of complete graphs and a 4-cycle", Discrete Mathematics, Vol. 275, 2004, 289-297.
  • Adams, P., Bryant, D., Maenhaut, B., "Cube factorisations of complete graphs", Journal of Combinatorial Designs, Vol. 12, 2004, 381-388.
  • Adams, P., Bryant, D., Long, S., Smythe, M., Tran, T., "Virtual drug discovery using graph theory", Bulletin of the Institute of Combinatorics and its Applications, Vol. 40, 2004, 100-106.
  • Adams, P., Bryant, D., Waterhouse, M., "Some equitably 2-colourable cycle decompositions", Ars Combinatoria, Vol. 85, 2007, 49-64.
  • Adams, P., Buchanan, M., Bryant, D., "A survey on the existence of G-designs", Journal of Combinatorial Designs, Vol. 16, No. 5, September 2008, 373-410.
  • Adams, P., Eggleton, R., MacDougall, J., "Degree sequences and poset structure of order 9 graphs", Congressus Numerantium, Vol. 166, 2004, 83-95.
  • Adams, P., Eggleton, R., MacDougall, J., "Graphs with a given degree sequence", Congressus Numerantium, 2006.
  • Adams, P., Eggleton, R., MacDougall, J., "Structure of graph posets for orders 4 to 8", Congressus Numerantium, Vol. 166, 2004, 63-81.
  • Adams, P., Eggleton, R., MacDougall, J., "Taxonomy of graphs of order 10", Congressus Numerantium, 2007.
  • Bryant, D., Grannell, M., Griggs, T., "Large sets of cycle systems on nine points", Journal of Combinatorial Mathematics and Combinatorial Computing, Vol. 53, 2005, 95-102.
  • Bryant, D., Horsley, D., Maenhaut, B., "Decompositions into 2-regular subgraphs and equitable partial cycle decompositions", Journal of Combinatorial Theory, Vol. 93, 2005, 67-72.
  • Bryant, D., Leach, C., Rodger, C., "Hamilton decompositions of complete bipartite graphs with a 3-factor leave", Australasian Journal of Combinatorics, Vol. 31, 2005, 331-336.
  • Bryant, D., Maenhaut, B., "Decompositions of complete graphs into triangles and Hamilton cycles", Journal of Combinatorial Designs, Vol. 12, 2004, 221-232.
  • Bryant, D., Maenhaut, B., Quinn, K., Webb, B., "Existence and embeddings of partial Steiner triple systems of order ten with cubic leaves", Discrete Mathematics, Vol. 284, 2004, 83-95.
  • Cochran, D., Lala, G., Keith, J., Adams, P., Bryant, D., Mitchelson, K., "Sequencing by aligning mutated DNA fragments", The Frontiers of Biochip Technologies, Springer Verlag, 2006, 231-245.
  • Gamble, G., Maenhaut, B., Seberry, J., Street, A., "Further results on strongbox secured secret sharing", Utilitas Mathematica, Vol. 66, 2004, 187-215.
  • Keith, J.M., Adams, P., Ragan, M.A., Bryant, D., "Sampling phylogenetic tree space with the generalised Gibbs sampler", Molecular Phylogenetics and Evolution, Vol. 34, 2005, 459-468.
  • Maenhaut, B., Wanless, I., "Atomic latin squares of order 11", Journal of Combinatorial Designs, Vol. 12, 2004, 12-34.

Efficient marking schedules for the short response paper of the Queensland Core Skills Test

Project Leader: Anne Street
Researchers: Ken Gray, Karen Harris, Colin Ramsay

Each year, approximately 35,000 Queensland students in their final year of high school, from all over the state, take the Core Skills Test, one paper of which is in short-response format. The schedule and resources make it impossible to sample the students' scripts and train the 650 markers well ahead of time, so no real data on how long each question takes to mark is available before marking starts. Because training markers is expensive, we want to keep it to a minimum, while producing an optimal match of markers to questions and ensuring a common finishing time for all markers. A successful technique was developed which supports dynamic assignment of resources, using a network to ensure that the loads are optimally balanced. This technique, due to Ken Gray, is also being developed for use in other problems, including some that arise in connection with testing.
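
To convey the load-balancing flavour of the allocation problem, here is a standard greedy (longest-processing-time-first) heuristic. This is illustrative only: it is not Gray's network-based technique, and the per-script marking times are invented.

```python
# Greedy load balancing: assign question types to marking teams, heaviest
# first, always to the currently least-loaded team. Times are hypothetical.
import heapq

def balance(marking_times, n_markers):
    """Return (load, marker, questions) per marker, sorted by marker."""
    loads = [(0.0, m, []) for m in range(n_markers)]
    heapq.heapify(loads)
    # Process questions in decreasing order of marking time.
    for q, t in sorted(enumerate(marking_times), key=lambda x: -x[1]):
        load, m, qs = heapq.heappop(loads)      # least-loaded marker
        heapq.heappush(loads, (load + t, m, qs + [q]))
    return sorted(loads, key=lambda x: x[1])

# Five question types with made-up per-script minutes, three marking teams.
assignment = balance([8.0, 5.0, 5.0, 3.0, 2.0], 3)
for load, marker, questions in assignment:
    print(marker, round(load, 1), questions)
```

The common-finishing-time goal in the report corresponds to minimising the spread of the final loads; Gray's technique achieves optimal balance via a network formulation rather than this greedy approximation.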

In 2004, Ken Gray, Colin Ramsay and Anne Penfold Street (together with Karen Harris) carried out a more detailed study of the technique for allocating marking so as to support dynamic assignment of resources, and studied further the properties of such allocations.

In 2005, a more detailed study of the technique was carried out, with a constructive proof of the existence of complementary pairs of the relevant designs bringing this project to completion. The project studied how to compare two designs, and to identify how and where they differ. It also made progress on how much of an individual design is needed in order to identify it uniquely.

In 2006, a study of the designs relevant to the construction of marking schedules was undertaken.

  • Gray, K., Street, A., "Defining sets", Handbook of Combinatorial Designs, 2, Chapman & Hall/CRC, Taylor & Francis Group, 2007, 382-385.
  • Gray, K., Street, A., "On defining sets of full designs and of designs related to them", Journal of Combinatorial Mathematics and Combinatorial Computing, Vol. 60, 2007.
  • Gray, K., Street, A., Harris, K., Ramsay, C., "Existence of complementary pairs of proportionally balanced designs", Utilitas Mathematica, 2005, 43-64.

Evolutionary computation using high performance computing facilities at UQ

Project Leader: Janet Wiles
Researcher: Scott Bolland

The aim of this project was to audit the high-performance computing resources available to ACCS members (including supercomputers and network clusters) and to create an online document detailing how to access them and run large-scale simulations in the area of evolutionary computation. The resulting documentation was made available online.

Intro to graph and network theory for complex systems researchers

Project Leader: Janet Wiles
Researcher: Kai Willadsen

This project involved the collation and presentation of ways in which network and graph-theoretic techniques can and have been applied in various areas of complex systems research. The primary focus of this project was to make relevant graph-theoretic approaches easily accessible to complex systems researchers, and to facilitate cross fertilisation of ideas on the use of graph-theoretic techniques between complex systems disciplines. In order to meet the above goals, this project was to produce tutorial material aimed at providing a basic introduction to the applications of graph theory to complex systems research, and reference materials to facilitate further exploration of the subject matter.
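
Two of the most commonly introduced graph-theoretic measures in such tutorials are the degree distribution and the clustering coefficient. A minimal sketch (the example graph is made up; the project's actual tutorial materials are not reproduced here):

```python
# Degree distribution and average clustering coefficient of a small
# undirected graph stored as an adjacency-set dictionary.
from collections import Counter
from itertools import combinations

graph = {
    0: {1, 2, 3},
    1: {0, 2},
    2: {0, 1},
    3: {0},
}

degrees = {v: len(nbrs) for v, nbrs in graph.items()}
degree_distribution = Counter(degrees.values())  # degree -> node count

def clustering(v):
    """Fraction of pairs of v's neighbours that are themselves adjacent."""
    nbrs = graph[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for a, b in combinations(nbrs, 2) if b in graph[a])
    return 2.0 * links / (k * (k - 1))

avg_clustering = sum(clustering(v) for v in graph) / len(graph)
print(dict(degree_distribution), round(avg_clustering, 3))
```

In complex systems applications these measures distinguish, for example, random from small-world network structure, which is the kind of connection the tutorial materials were designed to make accessible.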

In 2004, this project produced a self-guided tutorial, a tutorial presentation and a set of web-accessible reference materials. The tutorial and accompanying presentation provide an introduction to the general area of graph theory in complex systems. The reference materials provide more formal definitions of common terms and measurements, as well as information about their usage. This material was made available online.

Network security

Project Leader: Janet Wiles
Researchers: Zhao Dong, Jonathon Hadwen

The aim of this project was to explore the use of complex systems analysis techniques to detect patterns in network traffic. A feasibility study was completed and research training provided (Summer Project 2003/04, funded by AusCERT).

Multi-objective optimisation

Project Leader: Hussein Abbass
Researchers: Lam Bui, Hoang Xuan Dao, David Green, Tri Hai Le

When solving many real-life problems, one is usually faced with two or more objectives that are in conflict, requiring a compromise between them. Multi-objective optimisation is about solving problems with conflicting objectives. In this project, we developed robust multi-objective optimisation techniques for decomposing and solving complex problems with many constraints and variables in the presence of noise.
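
The core notion underlying all of this work is Pareto dominance: with conflicting objectives there is no single best solution, only a front of mutually non-dominated compromises. A minimal sketch (the objective vectors are invented, and this is not the project's algorithm):

```python
# Extract the Pareto front of a point set under minimisation of all
# objectives: a point survives if no other point is at least as good in
# every objective and strictly better in one.
def dominates(p, q):
    """True if p dominates q (minimisation)."""
    return all(a <= b for a, b in zip(p, q)) and \
           any(a < b for a, b in zip(p, q))

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points)]

# Two conflicting objectives, e.g. cost vs. risk (values made up).
points = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
print(pareto_front(points))  # [(1, 5), (2, 3), (4, 1)]
```

Noise, the focus of several of the papers below, complicates exactly this test: with noisy objective evaluations, dominance can no longer be decided from a single comparison.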

In 2004, the research team developed two approaches for handling multi-objective optimisation problems. The first approach, published at PPSN 2004, is an interactive method for solving multi-objective optimisation problems through a reciprocal set of actions between the decision maker and the computer. The second approach, under review, investigates fast algorithms for handling noisy multi-objective optimisation problems.

In 2005, the research team developed methods for optimisation problems in changing environments using multi-objective approaches. We also developed a fast method, based on the concept of fitness inheritance, for optimisation problems with noise. These methods were published at GECCO 2005 and CEC 2005. (This work included two 2005/06 summer projects.)

In 2006, our work on multi-objective optimisation continued to be very successful, with new efficient methods for handling large-scale, complicated multi-objective black-box optimisation problems. Most of this work has been submitted for journal publication, hence the small number of papers appearing in 2006. We anticipate publishing most of it in 2007 and advancing the research in this area further.

In 2007, we succeeded in introducing a distributed framework of local models, using explicit niching as an overarching umbrella, to solve multi-objective optimisation problems (MOPs). The concept behind the framework is for the search to be decomposed and conducted locally in different areas of the decision search space (creating local models), which allows the local models to be distributed across different processing nodes. The interaction among models is controlled and exploited by simple rules inspired by a well-known paradigm, particle swarm optimisation. The framework has been validated in both noisy and noise-free environments. Several important parts of the results have been published in (or accepted by) two Springer journals with high ISI impact factors. Dr Lam Bui successfully completed his PhD, and his thesis was awarded the prize for the best thesis in Information Technology at UNSW@ADFA.

In 2008, we finalised our work on localisation. Our proposed method, which solves multi-objective problems through local models whose movements are controlled by dynamic forces, proved to be an efficient way of solving these problems in environments with high noise.

  • Abbass, H., Green, D., "Performance analysis of evolutionary multi-objective optimization methods in noisy environments", Proceedings of the 8th Asia Pacific Symposium on Intelligent and Evolutionary Systems, 2004, 29-39.
  • Bui, L., Abbass, H., Essam, D., "Cooperative coevolution of genotype-phenotype mappings to solve epistatic optimization problems", Recent Advances in Artificial Life, World Scientific Publishers, 2005, 29-42.
  • Bui, L., Abbass, H., Essam, D., "Fitness inheritance for noisy evolutionary multi-objective optimization", Proceedings of the Genetic and Evolutionary Computation Conference GECCO'05, 2005, 779-785.
  • Bui, L., Alam, S., Multi-objective optimisation in computational intelligence—Theory and practice, IGI Global, 2008.
  • Bui, L., Branke, J., Abbass, H., "Diversity as a selection pressure in dynamic environments", Proceedings of the Genetic and Evolutionary Computation Conference GECCO'05, 2005, 1557-1558.
  • Bui, L., Branke, J., Abbass, H., "Multiobjective optimization for dynamic environments", Congress on Evolutionary Computation, 2005.
  • Bui, L., Deb, K., Abbass, H., Essam, D., "Dual guidance in evolutionary multi-objective optimization by localization", 6th International Conference on Simulated Evolution and Learning, 2006; LNCS, Vol. 4247, 384-391.
  • Bui, L., Deb, K., Abbass, H., Essam, D., "Interleaving guidance in evolutionary multi-objective optimisation", Journal of Computer Science and Technology, Vol. 23, No. 1, 2008, 44-63.
  • Bui, L., Soliman, O., Abbass, H., "A modified strategy for the constriction factor in particle swarm optimisation", Progress in Artificial Life, December 2007; Lecture Notes in Computer Science, Vol. 4828, 333-344.
  • Teo, J., Abbass, H., "Multi-objectivity and complexity in embodied cognition", IEEE Transactions on Evolutionary Computation, Vol. 9, No. 4, 2005, 337-360.
  • Yang, A., Abbass, H., Sarker, R., "Land combat scenario planning: A multiobjective approach", 6th International Conference on Simulated Evolution and Learning, 2006; LNCS, Vol. 4247, 837-844.

Automatic problem decomposition

Project Leader: Hussein Abbass
Researchers: Daryl Essam, David Goldberg, Chris Lokan, Robert McKay, Kumara Sastry

Solving many real-life problems is a complex task. The number of elements and factors in each problem is enormous, and the only way to solve these problems reliably, quickly and accurately is to decompose them into smaller sub-problems. Unfortunately, when faced with a new problem, we do not usually know the correct decomposition. This project involved automatically decomposing a problem on the fly while solving it.
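
One standard way to discover a decomposition automatically is to probe the black-box function for variable interactions: two variables can be optimised independently if moving one does not change the effect of moving the other. The sketch below illustrates that idea with an invented test function; it is not the project's method.

```python
# Perturbation-based interaction probe for a black-box function f.
def interacts(f, i, j, x, delta=1.0, tol=1e-9):
    """Detect interaction between variables i and j of f around point x."""
    def perturb(base, idx, d):
        y = list(base)
        y[idx] += d
        return y
    d1 = f(perturb(x, i, delta)) - f(x)        # effect of moving x_i
    x2 = perturb(x, j, delta)
    d2 = f(perturb(x2, i, delta)) - f(x2)      # same move, after moving x_j
    return abs(d1 - d2) > tol                  # different effect => interaction

# This function splits into independent parts {x0} and {x1, x2}.
f = lambda x: x[0] ** 2 + (x[1] * x[2]) ** 2
x0 = [1.0, 1.0, 1.0]
print(interacts(f, 0, 1, x0), interacts(f, 1, 2, x0))  # False True
```

Grouping variables by the interactions found this way yields sub-problems that can be solved separately, which is the payoff the paragraph above describes.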

In 2004, we established a very efficient methodology for decomposing a special class of black-box optimization problems. We then identified a special class of optimization problems in changing environments, where the changes are structural. For this class, we extended the methodology to track optima in a changing environment. The proposed method can handle types of change that previous methods in the literature cannot.

In 2005, the research team continued developing efficient algorithms for hard problems, with one paper appearing at GECCO 2005 and another submitted for review.

In 2006, our focus was on the automatic decomposition of classification problems. Dr Minh Ha Nguyen completed her PhD, with a recommendation from one of her examiners that the thesis deserved a prize. Some of this work has not yet appeared in publication; a paper accepted for IEEE SMC Part C will appear in 2007. Other related outputs included an invited review chapter on probabilistic model-building genetic programming and new projects in the area of supply chain management.

In 2008, we introduced a number of ways to automatically decompose machine learning problems, and our work on this problem from previous years started to appear in high-quality journals. For example, the work with Nguyen on cooperative co-evolution and mixture of experts appeared in IEEE Transactions on Systems, Man and Cybernetics—Part C, while the work with Dam and Pornthep was accepted (to appear in 2009) in IEEE Transactions on Neural Networks.

  • Nguyen, M., Abbass, H., McKay, R., "Analysis of CCME: Co-evolutionary dynamics, automatic problem decomposition and regularisation", IEEE Transactions on Systems, Man, Cybernetics—Part C, Vol. 38, No. 1, 2008, 100-109.
  • Nguyen, M., Abbass, H., McKay, R., "A novel mixture of experts model based on cooperative coevolution", Neurocomputing, 2006, 155-163.
  • Sarker, R., Freeman, G., Kara, S., Kayis, B., Ray, T., Abbass, H., "A multi-agent approach for analyzing material flow in a manufacturing supply chain", The Second International Intelligent Logistics Systems Conference, 2006.
  • Sastry, K., Abbass, H., Goldberg, D., "Sub-structural niching in non-stationary environments", AI 2004: Advances in Artificial Intelligence, December 2004; LNCS, Vol. 3339, 873-885.
  • Sastry, K., Abbass, H., Goldberg, D., Johnson, D., "Sub-structural niching in estimation of distribution algorithms", Proceedings of the Genetic and Evolutionary Computation Conference GECCO'05, 2005, 671-678.
  • Shan, Y., McKay, R., Essam, D., Abbass, H., "A survey of probabilistic model building genetic programming", Scalable Optimization via Probabilistic Modelling: From Algorithms to Applications, Studies in Computational Intelligence, Vol. 33, 1, Springer Verlag, 2006.

Software architecture and scale-free networks

Project Leader: Geoff Dromey
Researcher: Lian Wen

This project studied the evolution and topological structure of large software systems and their relationships with scale-free networks. We have found that the component architecture of all the tested Java packages is scale-free, and that a close relationship exists between optimised sorting algorithms and scale-free networks. This will lead to practical methods by which to control and manage the architecture of large software systems, as well as encouraging further research into their evolution.
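
The signature of a scale-free network is a power-law degree distribution. The sketch below shows the simplest way such a claim is checked, a least-squares fit on log-log axes; the dependency counts are fabricated, and real analyses (including this project's) would use sounder estimators and far larger systems.

```python
# Estimate the exponent of a power-law degree distribution by linear
# regression on log-log axes. Counts are made up, roughly following k^-2.
import math

counts = {1: 400, 2: 100, 4: 25, 8: 6, 16: 2}  # degree -> component count

xs = [math.log(k) for k in counts]
ys = [math.log(c) for c in counts.values()]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
print(round(-slope, 2))  # estimated exponent; close to 2 for this data
```

For a software system, the nodes would be classes or components and the edges their dependencies, extracted by tools like those described in the 2005 progress below.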

In 2005, the major achievement was the development of a suite of graphics-based tools for studying the architectural properties (including scale-free network properties) of implementations of large-scale software systems and packages. A tool-set was also developed for studying the scale-free properties of sorting algorithms of differing relative efficiencies. An important observation is that the closer an algorithm approaches the N log N limit, the more likely it is to exhibit scale-free properties.

In 2006, we drafted two papers on our earlier work on the scale-free properties of sorting algorithms and of large software libraries. Results of this work were presented at a Workshop on Complex Systems held at the Warren Centre in Sydney in April 2006. Part of this work has also been written up in Lian Wen's PhD thesis, which was submitted for examination in late 2006.

In 2007, we built a tool to study software systems as complex networks. In our paper we suggest ways of controlling and changing how systems evolve to make them easier to understand and maintain. We are also revising a paper titled "Software engineering and scale-free networks" for publication in IEEE Transactions on Systems, Man and Cybernetics.

  • Wen, L., Kirk, D., Dromey, R.G., "Software systems as complex networks", The 6th IEEE International Conference on Cognitive Informatics, 2007, 106-115.

Adaptive network-centric multi-agent architecture for land combat

Project Leader: Hussein Abbass
Researchers: Ruhul Sarker, Ang Yang

Land combat is a complex adaptive system. The aim of this project was to develop a multi-agent system for land combat to help understand the dynamics of this complex system and potentially map lessons learnt in the defence domain to other projects such as those related to game theory in economics. ACCS provided partial funding for this project in the form of financial assistance for training the student, with the main source of funding coming from UNSW.

In 2005, a multi-agent system called WISDOM (the Warfare Intelligent System for Dynamic Optimisation of Missions) was developed. The system is based on a new multi-agent architecture that we developed. The architecture is network-centric, combining network theory, dynamic systems, data mining and distributed AI in a single architecture that is scalable and efficient. A number of papers have been published on this topic in high-impact journals, conferences and book chapters.

In 2006, Ang Yang submitted his PhD on this topic. The WISDOM system developed in his thesis was used by a number of DSTO colleagues. We developed methodologies for fitness landscape analysis of combat games (see the IEEE SMC-B paper). We also collaborated with Dr K.C. Tan, using some of our models to study civil violence with game theory. Last, but not least, we continued our collaboration with DSTO on optimising fleet mixes.

  • Abbass, H., Baker, S., Bender, A., Sarker, R., ‘Anticipating future army supply chain fleet capabilities’, Proceedings of the 2007 SimTecT, June 2007.
  • Abbass, H., Baker, S., Bender, A., Sarker, R., ‘Identifying the fleet mix in a military setting’, The Second International Intelligent Logistics Systems Conference, 2006.
  • Quek, H., Tan, K., Goh, C., Abbass, H., ‘Modeling civil violence: An evolutionary multi-agent, game theoretic approach’, IEEE Congress on Evolutionary Computation, 2006, 1624-1631.
  • Yang, A., Abbass, H., Barlow, M., Sarker, R., Curtis, N., ‘Evolving capability requirements in WISDOM-II’, Recent Advances in Artificial Life, Vol. 3, World Scientific Publishers, 2005.
  • Yang, A., Abbass, H., Sarker, R., ‘Characterizing warfare in red teaming’, IEEE Transactions on Systems, Man and Cybernetics - Part B, Cybernetics, Vol. 36, No. 2, 2006, 268-285.
  • Yang, A., Abbass, H., Sarker, R., ‘Evolving agents for network centric warfare’, Proceedings of the Second Workshop on Military and Security Applications of Evolutionary Computation (MSAEC 2), Washington D.C., 2005.
  • Yang, A., Abbass, H., Sarker, R., ‘Risk assessment of capability requirements using WISDOM-II’, The International Society for Optical Engineering (SPIE), Microelectronics, MEMS, and Nanotechnology Symposium, 2005.
  • Yang, A., Abbass, H., Sarker, R., ‘WISDOM-II: A network centric model for warfare’, Knowledge-Based Intelligent Information and Engineering Systems, September 2005; Lecture Notes in Computer Science, Vol. 3683, 813.

Adaptive network mining for security and safety

Project Leader: Hussein Abbass
Researchers: Michael Barlow, Daryl Essam, Yubin Yang

Network mining is a new concept in data mining. Traditional data mining struggles when faced with a small dataset. Moreover, many assumptions that were necessary to build the theory behind data mining techniques are unfortunately impractical. Take, for example, the assumption of independence: the values a field in a database table can take are assumed to be independent of each other, and the records are likewise assumed to be independent. Consider two fields in a database table, the names of employees and their telephone numbers, and two employees, John and Mark. Traditionally, these two values are treated as entirely independent of each other. However, if both employees have the same telephone number, this assumption quickly proves invalid, and it also implies that the two database records are somehow dependent. Assuming independence of data records and of values within fields makes theoretical proofs easier, but it increases the demand for data, and traditional data mining techniques will overlook many interesting patterns of this kind. In this project, we aimed to develop a software infrastructure for network mining that could potentially be used for security and safety applications such as analysing flight accidents.
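To illustrate the kind of dependence that the independence assumption hides, the sketch below links database records that share a field value, as in the shared telephone number above. It is a minimal illustration only, not the DeepRed system developed in this project; the record layout and field names are hypothetical.

```python
from collections import defaultdict

def shared_value_links(records, field):
    """Group record ids whose rows share a value in the given field.

    Records sharing a value (e.g. a telephone number) are linked,
    which contradicts the usual independence assumption.
    """
    by_value = defaultdict(list)
    for rec_id, rec in records.items():
        by_value[rec[field]].append(rec_id)
    # Keep only values shared by more than one record
    return {v: ids for v, ids in by_value.items() if len(ids) > 1}

# Toy data mirroring the John/Mark example in the text
records = {
    1: {"name": "John", "phone": "5550-1234"},
    2: {"name": "Mark", "phone": "5550-1234"},
    3: {"name": "Jane", "phone": "5550-9999"},
}
links = shared_value_links(records, "phone")
# John's and Mark's records are linked through the shared number
```

Chaining such links across many fields turns a flat table into a graph over the records, which is the object that network mining techniques analyse.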

We started this project in 2005 and produced a software system named DeepRed. DeepRed is capable of finding deep linkages in a dataset. It uses link analysis to identify linkage information, Bayesian inference to estimate the importance of links and nodes in a graph, and efficient methods that combine social network measures with time series analysis to analyse the temporal evolution of a network. We extended the funding from the ARC through an ADFA strategic grant, which helped us to improve the system further and develop further theory. A paper on analysing terrorist networks was published in the SPIE complex systems conference.

In 2006, we continued and expanded our work in the security domain. One of the interesting studies we undertook was in collaboration with Dr. Eleni Petraki from the University of Canberra on understanding counterfeiting and developing strategies to mitigate the risk arising from counterfeiting crimes and its impact on security.

  • Abbass, H., Barlow, M., Essam, D., Yang, Y., ‘Connecting the dots to disconnect them: A study into network evolution and dynamics for analyzing terrorist networks’, Complex Systems, 2005.
  • Barlow, M., Yang, A., Abbass, H., ‘A temporal risk assessment framework for planning a future force structure’, Proceedings of the 2007 IEEE Symposium on Computational Intelligence for Security and Defense Applications, April 2007.
  • Kirley, M., Abbass, H., McKay, R., ‘Diversity mechanisms in Pitt-style evolutionary classifier systems’, Data Mining and Discovery Approaches Based on Rule Induction Techniques, Massive Computing series, Vol. 6, Springer Verlag, 2006, 433-458.
  • Petraki, E., Bo, Z., Abbass, H., ‘Mitigating the threat of counterfeiting to international security through public awareness’, The Security Technology Conference, the 5th Homeland Security Summit, 2006, 190-207.
  • Yang, A., Abbass, H., Sarker, R., ‘Landscape dynamics in multi-agent simulation combat systems’, AI 2004: Advances in Artificial Intelligence, December 2004; LNCS, Vol. 3339, 39-50.

Emergence of communication in multi-agent systems

Project Leader: Hussein Abbass
Researchers: Michelle McPartland, Stefano Nolfi

Communication is at the heart of many systems. Understanding how communication may emerge between agents in an unknown environment can shed light on fundamental scientific questions as well as providing solutions for real-life systems such as free-flight air traffic control. Key fundamental questions include: what are the minimum setup and conditions for communication to emerge, when would communication be considered beneficial for agents, and what are the advantages of indirect versus direct communication?

In 2005, we created a competitive environment with two teams of agents undertaking an exploration task - the quickest team to explore the largest area won. One team used indirect communication, controlled by an artificial neural network evolved using a Pareto multi-objective approach. The second team used direct communication and a fixed strategy for exploration. We found that different communication strategies result in different exploration strategies, and we established that communication is essential for certain types of cooperative behaviour to emerge. Results of this project were published in GECCO 2005.
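The Pareto multi-objective approach mentioned above retains controllers that are not dominated on any trade-off between objectives. As a hedged illustration (this is not the evolutionary algorithm used in the project, and the objective vectors are invented), the dominance check and front extraction at its core can be sketched as:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (maximisation)."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def pareto_front(population):
    """Return the non-dominated members of a set of objective vectors."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q != p)]

# Hypothetical objectives, e.g. (area explored, communication cost saved)
pop = [(5, 1), (3, 4), (4, 3), (2, 2)]
front = pareto_front(pop)   # (2, 2) is dominated by (4, 3) and drops out
```

Evolving against the whole front, rather than a single weighted score, lets the search keep qualitatively different exploration strategies alive at once.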

  • McPartland, M., Nolfi, S., Abbass, H., ‘Emergence of communication in competitive multi-agent systems: A Pareto multi-objective approach’, Proceedings of the Genetic and Evolutionary Computation Conference GECCO’05, 2005, 51–58.

Agent-based virtual insects: Extension and testing

Project Leader: Jim Hanan, Peter Robinson
Researcher: Katherine Duczmal

A Qu-Prolog based application-specific language for modelling insect behaviour has been prototyped by Katherine Duczmal in her honours project. Three dimensional models of plants expressed using L-systems provide the environment for insect perceptions. This project extends the language to meet the needs of entomological end-users. (2005/06 summer project)

  • Duczmal, K., ‘Agent-based virtual insects’, Proceedings of the Third Australian Undergraduate Students' Computing Conference, 2005, 1–8.

Application of Grid computing to complex systems modelling

Project Leader: David Abramson
Researchers: Blair Bethwaite, Colin Enticott, Slavisa Garic, Tom Peachey

Over the last several years, combinations of super-computers, or Grids, have been developed which couple geographically distributed resources such as high-performance computers, workstations, clusters of computers, and scientific instruments. Grids such as the US-based TeraGrid have begun to provide the infrastructure to support global collaboration in science and engineering in ways that were not previously possible. Many complex systems models have enormous resource requirements. In this project we investigated the application of Grid computing to these complex systems models, and illustrated the utility of this approach. In particular, we applied and further developed a number of Grid specific methodologies and tools.

During 2006, we ran a training course for ACCS members on using Grid computing, in particular, our Nimrod tools for supporting complex systems research. This opened up opportunities in a number of projects in areas of economics simulation and optimisation, air traffic control, ecology and genetic regulatory networks.

In 2007, we explored the use of Grid computing in designing optimal energy management strategies. This has resulted in a new project devoted to this activity (see page 18 - ‘Optimal allocation of embedded renewable electricity sources throughout the distribution network’). We are currently implementing the necessary infrastructure to allow us to run an industry energy modelling package on distributed Windows-based computers.

In 2008 we built infrastructure that would allow us to perform sensitivity studies on the Australian electricity distribution network. This required integration of the Plexos modelling system with the Nimrod parameter sweep tool. Because Plexos only runs on Windows, we ported Nimrod to Windows, which required modifying the Nimrod ‘Agent’ code to allow it to run under the Cygwin environment. We tested the system on a small study, but were not able to launch any large-scale experiments due to a lack of licenses for Plexos. This restriction is currently being investigated by the vendor.

  • Ayyub, S., Abramson, D., “GridRod - a service oriented dynamic runtime scheduler for Grid workflows”, Proceedings of the 21st ACM International Conference on Supercomputing, June 2007.
  • Ayyub, S., Abramson, D., Enticott, C., Garic, S., Tan, J., “Executing large parameter sweep applications on a multi-VO testbed”, Proceedings of the Seventh International IEEE Symposium on Cluster Computing and the Grid, May 2007.
  • Kurniawan, D., Abramson, D., “A WSRF-compliant debugger for Grid applications”, Proceedings of the IEEE International Parallel and Distributed Processing Symposium, March 2007.
  • Kurniawan, D., Abramson, D., “An integrated Grid development environment in Eclipse”, Proceedings of the 3rd IEEE International Conference on e-Science and Grid Computing, December 2007.
  • Zheng, C., Katz, M., Papadopoulos, P., Abramson, D., Ayyub, S., Enticott, C., Garic, S., Goscinski, W., Arzberger, P., Lee, B., Phatanapherom, S., Sriprayoonsakul, S., Uthayopas, P., Tanaka, Y., Tanimura, Y., Tatebe, O., “Lessons learned through driving science applications in the PRAGMA grid”, The International Journal of Web and Grid Services, Vol. 3, No. 3, 2007, 287-312.

Experimental design

Project Leader: Anne Street
Researchers: Diane Donovan, Ken Gray, George Havas, Marks Nester, Colin Ramsay

Experimental designs can be thought of as arrangements of the elements of a set into subsets with predetermined properties desirable for particular applications. Such arrangements are characterised and described by various factors, including: their current and potential fields of application, such as cryptographic protocols, the planning of sample surveys, clinical experiments or marking schemes for tests; their properties; their size and complexity; and their origins, possibly in algebraic structures, in finite geometries, in computer search, or in some combination of these. Each such factor is a potential subject of interest in its own right, but the research in this project aimed to identify and address problems and complexities that arise when several, if not all, of these factors are in play. These arrangements are often conveniently represented by graphs, so many examples and applications call on graph theory.
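As a concrete instance of such an arrangement, a Latin square places n symbols in an n x n array so that each symbol occurs exactly once in every row and every column. The check below is a minimal sketch; the example square is ours, not taken from the project.

```python
def is_latin_square(square):
    """True if every row and every column contains each symbol exactly once."""
    n = len(square)
    symbols = set(range(n))
    rows_ok = all(len(row) == n and set(row) == symbols for row in square)
    cols_ok = all(set(col) == symbols for col in zip(*square))
    return rows_ok and cols_ok

L = [[0, 1, 2],
     [1, 2, 0],
     [2, 0, 1]]
print(is_latin_square(L))                  # True
print(is_latin_square([[0, 1], [0, 1]]))   # False: columns repeat a symbol
```

A critical set is a partial array of this kind that extends to exactly one Latin square; such uniquely completable partial structures underlie the secret sharing applications studied here.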

In 2007, we continued with earlier work on secret sharing schemes, especially on problems with the use of minimal defining sets for secret sharing; Sudoku squares; critical sets in Latin squares; applications of partitions of full designs of triples into small planes; and computational techniques for investigating simple groups which are likely to be applicable in these areas.

In 2008 we extended our earlier work in most of these areas, as well as considering applications of balanced sequential arrays in forestry. This related to designing layouts for experimental forests.

  • Gray, K., Street, A., “Constructing defining sets of full designs”, Utilitas Mathematica, Vol. 76, 2008, 91–99.
  • Gray, K., Street, A., “Defining sets”, Handbook of Combinatorial Designs, Edited by Colbourn, C., Dinitz, J., 2nd Edition, Chapman & Hall/CRC, Taylor & Francis Group, 2007, 382-385.
  • Gray, K., Street, A., “On defining sets of full designs and of designs related to them”, Journal of Combinatorial Mathematics and Combinatorial Computing, Vol. 60, 2007, 97-104.
  • Gray, K., Street, A., “Two regular solids holding the twelve (6,3,2) designs”, Bulletin Institute Combinatorics and Applications, 2009.
  • Havas, G., Lawrence, J.L., Ramsay, C., Street, A., Yazici, E., “Defining set spectra for designs can have arbitrarily large gaps”, Utilitas Mathematica, Vol. 75, March 2008, 65–79.
  • Mathon, R., Street, A., Gamble, G., “Classification of partitions of the set of all triples on ten points into copies of Fano and affine planes”, Discrete Mathematics, Vol. 308, 2008, 2789–95.
  • Street, A., Nester, M.R., “An introduction to balanced sequential arrays on the square”, Mathematica Slovaca, Vol. 59, No. 2, 2009, 77–192.

Modelling fear conditioning in rats

Project Leader: Robert Colvin
Researchers: Geoff Dromey, Martyn Symons

Fear conditioning involves the pairing of a neutral stimulus (e.g. a tone) with an aversive stimulus (e.g. a footshock). After a number of pairings, the neutral stimulus elicits the behavioural and physiological responses normally seen after exposure to the aversive stimulus. The neural pathways involved in the formation of these fear memories have been studied extensively. By building a detailed computer model of the neural networks underlying fear conditioning, this project aimed to offer a rapid and powerful way of testing hypotheses and of directing further experiments into fear conditioning being performed at the Queensland Brain Institute (QBI).
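For intuition about how repeated pairings build a conditioned response, a classic error-driven account is the Rescorla-Wagner update, in which associative strength moves toward an asymptote on each paired trial. The sketch below is illustrative only - the project itself modelled fear conditioning with Behaviour Trees and neural networks, not this equation - and the learning-rate and asymptote values are arbitrary.

```python
def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Track the associative strength V of a tone over conditioning trials.

    trials: sequence of booleans, True when the tone is paired with shock.
    alpha: learning rate; lam: asymptote of learning (both arbitrary here).
    """
    v, history = 0.0, []
    for paired in trials:
        target = lam if paired else 0.0
        v += alpha * (target - v)   # error-driven update
        history.append(v)
    return history

acquisition = rescorla_wagner([True] * 10)   # strength climbs toward lam
```

After enough pairings the tone alone predicts the shock, mirroring the observation that the neutral stimulus comes to elicit the fear response.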

In 2006, two Behaviour Tree models were developed, one giving the high-level, observable behaviour of a rat under a fear conditioning experiment, and the other giving a more detailed description of the neurological processes involved.

In 2008 a $35,000 UQ Early Career Researcher grant was obtained for this project. Martyn Symons was employed, and created four models of fear conditioning using Behaviour Trees. In collaboration with researchers at the QBI, these models were simulated and analysed. Preliminary work began on creating more detailed models using neural networks.

Distributed computing for interactive modelling research

Project Leader: Janet Wiles
Researchers: Jon Kloske, Bernard Pailthorpe, James Watson

The goal of this project was to develop a system to test the viability of using undergraduate laboratories for real-time research. The motivation for this project was that in complex systems modelling (1) the interactive exploration of large parameter spaces is fundamental to understanding system dynamics, and (2) the simulation development process involves iterative testing and refinement cycles. A single trial of one of our complex systems models typically takes several minutes to run. Therefore it is feasible to test a small number of options within a session on a single machine, but batched runs must be performed to analyse larger parameter sets and systems. When macro-scale effects need to be tested over large parameter sets, using batched runs slows the identification of interesting results and the refinement of the computational models. Distributed computing is a solution that enables interactive exploration of parameter spaces where the computational load is the limiting factor. This project was part-funded by the Queensland Cyber Infrastructure Foundation (QCIF).
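The batched-versus-interactive trade-off above comes down to fanning independent model trials out across whatever workers are available. As a minimal, hedged sketch (this is not the project's lab-distribution software; the toy model and parameter names are invented), a parameter sweep over a pool of workers might look like:

```python
from itertools import product
from multiprocessing.pool import ThreadPool

def run_model(params):
    """Stand-in for a single trial of a complex systems model."""
    a, b = params
    return {"a": a, "b": b, "result": a * a + b}

def sweep(a_values, b_values, workers=4):
    """Fan a parameter grid out across a pool of workers.

    A thread pool stands in here for the undergraduate lab machines
    the project distributed trials across; each grid point is one run.
    """
    grid = list(product(a_values, b_values))
    with ThreadPool(workers) as pool:
        return pool.map(run_model, grid)

results = sweep(range(3), range(2))   # 6 trials, run concurrently
```

Because each trial is independent, adding machines shortens the wall-clock time of a sweep almost linearly, which is what makes interactive exploration of large parameter spaces feasible.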

In 2007, a software prototype for interactive distributed computing was developed by James Watson in collaboration with Jon Kloske, and was tested in the IT&EE student labs at The University of Queensland, successfully demonstrating the feasibility of interactive, distributed computing for complex systems models. The software continues to run in those labs. A summary report is published on the QCIF website at: au/research/Reports/InterModelFinalReport. This work was presented at the Seventh International Workshop on Information Processing in Cells and Tissues in Oxford.

  • Watson, J., Maetschke, S., Wiles, J., “Dsweep: A lightweight tool for distributed parameter sweeps”, Proceedings of the Seventh International Workshop on Information Processing in Cells and Tissues, August 2007.
  • Watson, J., Wiles, J., “QCIF report: An on-site survey of high performance computing”, Queensland Cyber Infrastructure Foundation Technical Report, November 2007.

High performance complex systems simulation project

Project Leader: David Abramson
Researchers: Colin Enticott, Tom Peachey

In-silico experimentation is increasingly being used to understand the behaviour of complex systems when it is not possible to perform real-world experiments. For example, a computer simulation of an electrical distribution grid, a genetic regulatory network, an air traffic control system or a healthcare system might model the behaviour of that system under particular conditions that cannot be tested in the real world. Running such a model shows what happens when certain input conditions are present. However, to fully understand the dynamics of the system, it is necessary to explore what happens when many inputs change. Moreover, to get results that are statistically significant, it might be necessary to run the model many times with different initial conditions. This can require enormous amounts of computing time. Recently, combinations of super-computers, or Grids, have been developed which couple geographically distributed resources such as high-performance computers, workstations and clusters of computers, yielding potentially very large distributed supercomputers. In previous research we developed a methodology (parametric modelling) and software environment (Nimrod [1]) for performing very large in-silico experiments using Grids. In this project we aimed to apply parametric modelling to a range of ACCS-supported complex systems modelling experiments, and to augment Nimrod with new capabilities for exploring complex systems.

Factorial and fractional factorial designs are the most commonly used experimental designs, but they have been underused in computer experiments. To facilitate these techniques, in 2007 we developed tools that automatically generate fractional factorial designs, perform the experiment, and then provide an analysis of the results. The tools interface with an existing tool called Nimrod/G, which organises the execution of the model using the parallelism offered by a computational grid. The new system, known as Nimrod/E, augments the existing Nimrod tool chain.
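A fractional factorial design trades resolution for run count: for two-level factors, a half fraction fixes the last factor to the product of the others (defining relation I = AB...K), halving the number of runs. The sketch below is a minimal illustration assuming -1/+1 coded levels; the coding and factor count are ours and do not reflect Nimrod/E's actual interface.

```python
from itertools import product

def half_fraction(k):
    """2^(k-1) half-fraction design for k two-level factors.

    Levels are coded -1/+1; the k-th factor is aliased with the
    product of the first k-1, so every run satisfies I = AB...K.
    """
    runs = []
    for levels in product((-1, 1), repeat=k - 1):
        last = 1
        for x in levels:
            last *= x
        runs.append(levels + (last,))
    return runs

for run in half_fraction(3):
    print(run)   # 4 runs instead of the 8 a full 2^3 factorial needs
```

The saving grows quickly: for 10 two-level factors a half fraction needs 512 runs rather than 1024, and deeper fractions cut further at the cost of confounding higher-order interactions.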

In 2008 we further developed the Nimrod/E tool chain, building components that generate designs, and display parameter sensitivity using a variety of statistical graphing techniques. We tested the system on some bioengineering models of the heart, and demonstrated the effectiveness of the system for complex systems design.

  • Peachey, T., Diamond, N., Abramson, D., Sudholt, W., Michailova, A., Amirriazi, S., “Fractional factorial design for parameter sweep experiments using Nimrod/E”, Journal of Scientific Programming, Vol. 16, No. 2&3, 2008.
  • Sher, A., Abramson, D., Enticott, C., Garic, S., Gavaghan, D., Noble, D., Noble, P., Peachey, T., “Incorporating local Ca2+ dynamics into single cell ventricular models using Nimrod/O”, International Conference of Computational Science (ICCS), Lecture Notes in Computer Science, Vol. 5101, 66–75.

General  publications

  • Abbass, H., 'Learning regularities and patterns using probabilistic finite state machines', Complexity International, 2007.
  • Abbass, H., 'An economical cognitive approach for biobjective optimization using bliss points and interaction', Soft Computing, Vol. 10, No. 8, 2006, 687-698.
  • Abbass, H., "An inexpensive cognitive approach for biobjective optimization using Bliss points and interaction", The 8th International Conference on Parallel Problem Solving from Nature (PPSN VIII), 2004; LNCS, Vol. 3242, 712-721.
  • Banks, J., Pailthorpe, B., Rothnagel, R., Hankamer, B., ‘Automatic particle picking algorithms for high resolution single particle analysis’, Proc. APRS Workshop on Digital Image Computing, 2005, 127-131.
  • Bolliger, J., Lischke, H., Green, D., ‘Simulating the spatial and temporal dynamics of landscapes using generic and complex models’, Ecological Complexity, Vol. 2, no. 2, 2005, 107-116.
  • Bordes, N., Pailthorpe, B., "High resolution scalable displays: Manufacturing and use", Proceedings. Information Visualisation, 2004, 151-156.
  • Connor, J., Symons, M., Feeney, G., Young, R.M., Wiles, J., 'The application of machine learning techniques as an adjunct to clinical decision making in alcohol dependence treatment: A preliminary study.', Journal of Substance Use and Abuse, 2006.
  • Dam, H., Lokan, C., Abbass, H., “Evolutionary online data mining: an investigation in a dynamic environment”, Evolutionary Computation in Dynamic and Uncertain Environments, Studies in Computational Intelligence, Vol. 51, Springer, 2007.
  • Green, D., The Serendipity Machine, Allen and Unwin, 2004.
  • Green, D., McKay, B., Namatame, A., Proceedings of the 8th Asia Pacific Symposium on Intelligent and Evolutionary Systems, Monash University, 2004.
  • Green, D., Klomp, N., Rimmington, G.R., Sadedin, S., Complexity in Landscape Ecology, Landscape Series, Vol. 4, Springer, 2006.
  • Green, D., Bransden, T., ‘Complexity Theory’, The McGraw-Hill Encyclopedia of Science & Technology, McGraw-Hill, 2006, 507-511.
  • Grunske, L., Neumann, R., ‘Process components for quality evaluation and quality improvement’, Second Workshop on Method Engineering for Object-Oriented and Component-Based Development, OOPSLA 2004, 51-62.
  • Hai, D., Abbass, H., Lokan, C., ‘DXCS: an XCS system for distributed data mining’, Proceedings of the Genetic and Evolutionary Computation Conference GECCO’05.
  • Hai, D., Abbass, H., Lokan, C., ‘Be Real! XCS with Continuous-Valued Inputs’, The Eighth International Workshop on Learning Classifier Systems, 2005.
  • Kanagarajah, A.K., Lindsay, P., Miller, A., Parker, D., 'An exploration into the uses of agent-based modeling to improve quality of health care', International Conference on Complex Systems, July 2006.
  • Martin, R., Bordes, N., Pailthorpe, B., "Semi-automatic feature delineation in medical images", Proceedings. Information Visualisation, 2004, 127-132.
  • McLachlan, G., Do, K., Ambroise, C., Analyzing Microarray Gene Expression Data, Wiley, 2004.
  • McLachlan, G., Khan, M., "On a resampling approach for tests on the number of clusters with mixture model-based clustering of tissue samples", Journal of Multivariate Analysis, Vol. 90, 2004, 90-105.
  • McLachlan, G., Chang, S., "Mixture modelling for cluster analysis.", Statistical Methods in Medical Research, Vol. 13, 2004, 347-361.
  • McLachlan, G., Bean, R., "Contribution to the discussion of paper by J. Friedman and J. Meulman.", Journal of the Royal Statistical Society B, Vol. 66, 2004, 846.
  • Meyer, B., "On the convergence behaviour of ant colony search", Proceedings of the 7th Asia-Pacific Complex Systems Conference, 2004, 153-167.
  • Ng, S., McLachlan, G., "Speeding up the EM algorithm for mixture model-based segmentation of magnetic resonance images", Pattern Recognition, Vol. 37, 2004, 1573-1589.
  • Ng, S., McLachlan, G., "Applying the EM algorithm in training neural networks: misconceptions and a new algorithm for multiclass classification", IEEE Transactions on Neural Networks, Vol. 15, 2004, 738-749.
  • Ng, S., McLachlan, G., Yau, K., Lee, A., "A survival mixture model adjusting random hospital effects for analysing", Statistics in Medicine, Vol. 23, 2004, 2729-2744.
  • Ng, S., Krishnan, T., McLachlan, G., "The EM algorithm", Handbook of Computational Statistics, Vol. 1, Springer Verlag, 2004, 137-168.
  • Nguyen, M.H., Abbass, H.A., McKay, R., ‘Analysis of CCME: Co-evolutionary dynamics, automatic problem decomposition and regularization’, IEEE Transactions on Systems, Man, Cybernetics, Part C, Vol. 38, No. 1, 2008, 100-109.
  • Nguyen, M., Abbass, H., McKay, R., ‘Stopping criteria for ensemble of evolutionary artificial neural networks’, Applied Soft Computing, Vol. 6, No. 1, 2005, 100-107.
  • Pailthorpe, B., Bordes, N., "Ultra-resolution display systems for computational science and engineering", 2004; Proceedings of SPIE, Vol. 5443.
  • Seebeck, L., Kaplan, S., "Understanding time in system design", Proceedings of the 8th World Conference on Systemics, Cybernetics and Informatics, 2004.
  • Shafi, K., Abbass, H., “Biologically-inspired complex adaptive systems approaches to network intrusion detection”, Information Security Technical Report, Vol. 12, No. 4, 2007, 209-217.
  • Soliman, O., Bui, L., Abbass, H., “The effect of a stochastic step length on the performance of the differential evolution algorithm”, Proceedings of the 2007 IEEE Congress on Evolutionary Computation, September 2007, 2850-2857.

World-class basic and applied inter-disciplinary research on questions fundamental to understanding, designing and managing complex systems
© 2009 The ARC Centre for Complex Systems, Australia