Objects, Rules, Constraints and Components for Agents and Simulations
Started in 2003, the ORC2AS project aims to build a comprehensive, easy to use and reuse framework of Automated Reasoning (AR) components to dramatically lower the cost of engineering autonomous agents, Multi-Agent Simulations (MAS) or any software service whose value is primarily derived from embedded AR. By "easy to use" I mean easy to learn, test, understand, interoperate with other software and deploy on standard platforms. By "easy to reuse", I mean easy to customize, debug and extend through assembly. Current state-of-the-art AR software largely fails on all these counts. This failure is the main stumbling block that prevents transferring the most advanced AR technology from research to profitable industrial applications. The goal of ORC2AS is to make this transfer cost-effective.
It is based on three key ideas:
1. Systematically reflect, in the AR software architecture, the conceptual reuse that pervades AR techniques;
2. In particular, exploit the fact that deduction, abduction, default reasoning, belief revision, belief update, planning, constraint solving and optimization (in logic, plausibilistic and probabilistic settings) have been shown to be easily reformulated as special cases of rule-based Constraint Programming (CP);
3. Thus, integrate in synergy: (a) Model-Driven Engineering (MDE) based on formal models, meta-models, and Model Transformations (MT) of objects, aspects, components, product lines and agents, with (b) CP based on the assembly of component-encapsulated object-oriented logical rule bases.
Building the ORC2AS framework involves investigating and integrating the following research dimensions:
1. Software process to define the artifacts, activities and roles to engineer Inference Engines (IE), Knowledge Bases (KB), Reasoning Components (RC) pairing one IE with one KB, AR applications and agents assembling RC with other components following architectural patterns, and MAS assembling agents following social interaction patterns;
2. Language design to define the modeling, meta-modeling, MT, knowledge representation and programming languages to support the engineering and deployed execution of both the framework and the applications that reuse it;
3. Formal foundations to define the formal semantics of the above languages and establish that the MT translations among them are semantic preserving;
4. CASE tools, to provide automated services supporting the application of the process using the languages;
5. Built-in components to provide efficient, user-friendly, basic AR building blocks on top of which more sophisticated RC and applications can be quickly and cheaply assembled;
6. Benchmark applications to empirically assess the practical benefits of the software process, designed languages, CASE tools and built-in components of the framework.
These dimensions are being and will continue to be investigated as joint research with several partners:
· Prof. Atkinson's group at Universität Mannheim, Prof. Gervais and Dr. Blanc's group at UPMC-LIP6, and Profs. Machado and Ramalho's group at Universidade de Campina Grande;
· Prof. Frühwirth's group at Universität Ulm contributes to dimensions 3 and 5;
· Dr. Wolf's group at the Fraunhofer Gesellschaft;
· Dr. Fages and Dr. Deransart's INRIA CONTRAINTES group contributes to dimensions 2, 3 and 5.
When the first ORC2AS-based application prototype becomes available, ORC2AS could evolve into an open-source project or a venture-capital-funded start-up. These are two alternative routes to attract to this project the critical mass of talent needed for it to fulfill its long-term potential: becoming a widely adopted, cost-effective medium for continuously transferring new AR research techniques to industry as seamlessly reusable software components.
Started in 2003, the ORC2AS project aims to build a comprehensive, easy to use and reuse framework of Automated Reasoning (AR) components to dramatically lower the cost of engineering autonomous agents, Multi-Agent Simulations (MAS) or any software service whose value is primarily derived from embedded AR. By AR, I mean here any form of executable inference task performed by manipulating sentences in a Knowledge Representation (KR) language. Far from being limited to monotonic deduction in classical logic, this also includes abduction, default reasoning, inheritance, belief revision, belief update, planning, constraint solving, optimization, induction and analogy with KR languages of various ontological and epistemological commitments. Different ontological commitments include propositional, first-order relational, first-order object-oriented, high-order relational or high-order object-oriented. Different epistemological commitments include binary and ternary logic under closed-world, semi-closed-world or open-world assumptions, possibilistic, plausibilistic and probabilistic. By "easy to use" I mean easy to learn, test, understand, interoperate with other software and deploy on standard platforms. By "easy to reuse", I mean easy to customize, debug, and extend through assembly.
Current state-of-the-art AR software largely fails on all these counts, primarily because it continues to be built following obsolete software engineering processes. These processes suffer from the following flaws:
1. They are purely code-driven and tend to be ad-hoc and unsystematic.
2. They produce no abstract pre-code artifacts beyond blueprints in arcane mathematical notations that conveniently gloss over most of the key design refinement issues, which are then only dealt with at the code level;
3. They produce virtually no testing artifact; much less do they include a systematic testing sub-process;
4. They result in monolithic software architectures that do not reflect the conceptual reuse among the AR techniques they implement, preventing cost-effective partial code reuse, customization and extension.
The resulting AR software tends to suffer from the following flaws:
1. It provides only a small, fixed subset of AR services (e.g., constraint solving over finite domains or deduction, default reasoning and inheritance) working with only one KR language (e.g., first-order relational Horn ternary logic), which together are only appropriate for a quite limited set of applications;
2. It is implemented using exotic platforms with small user communities; this translates into very limited or completely lacking application example bases, didactic literature, built-in service libraries, interoperability bridges with mainstream platforms, or visual IDE to browse reasoning explanations at variable granularity as needed for productive debugging.
Due to these flaws, reusing AR software in an industrial application whose AR requirements evolve over its lifecycle is prohibitively costly: it involves understanding, wrapping and extending poorly documented and non-modularly structured source code in a hard to learn language without the help of a visual IDE. The alternative -- developing from scratch an AR functionality for the special purpose of a given application using mainstream modeling and programming languages without any inference service available as built-in -- is often even more costly. Due to this cost barrier, the most advanced, versatile and scalable AR techniques generally remain neat publications and brittle, purely academic proof-of-concept prototypes that never get transferred to industry. This is a severe problem, because they are crucially needed to add high-value services to an ever-growing range of practical applications. The goal of the ORC2AS project is to break this cost barrier and make the most conceptually advanced AR as cost-effective and widely adopted as distributed transactional databases or computer graphics are today.
It is based on three key ideas:
1. Systematically reflect, in the AR software architecture, the conceptual reuse that pervades AR techniques;
2. In particular, exploit the fact that deduction, abduction, default reasoning, belief revision, belief update, planning, constraint solving and optimization (in logic, plausibilistic and probabilistic settings) have been shown to be easily reformulated as special cases of rule-based Constraint Programming (CP) [13];
3. Thus, integrate in synergy: (a) Model-Driven Engineering (MDE) based on formal models, metamodels, and Model Transformations (MT) [9] of objects, aspects, components, product lines and agents, with (b) CP based on the assembly of component-encapsulated Object-Oriented (OO) logical rule bases.
This integration involves research along six distinct dimensions: software process, language design, formal foundations, CASE tools, built-in components and benchmark applications. In what follows, I summarize in turn the starting point and research issues that ORC2AS aims to address along each of these dimensions. I then list the expected scientific and technological contributions of the whole framework.
Along the software process dimension, the ORC2AS project starts from the KobrA [8] model-driven, component-based, OO process to engineer product line Platform Independent Models (PIM) [10] in UML1.4. ORC2AS aims at improving and extending KobrA by answering the following open questions:
1. How to upgrade KobrA to a new KobrA-2 version aligned with the latest OMG standards UML2.1, OCL2.0, MOF2.0 and SPEM2.0? In particular, how to change its prescribed artifact set to leverage (a) the component-based modeling elements new to UML2.1 and (b) OCL2.0 to create fully refined PIM, free of natural language specification ambiguities, from which a Platform Specific Model (PSM) [10] and then code can be generated fully automatically through MT?
2. How to extend KobrA-2 with specific UML2.1 artifacts and sub-process for GUI modeling?
3. How to extend KobrA-2 with specific sub-processes to develop and test MT for (a) PIM to PIM aspect weaving, (b) pre- and post-weaving PIM early testing by OCL2.0 constraint execution, and (c) translation of PIM to PSM and then to source code and late testing code?
4. How to extend KobrA-2 into an ORC2AS process that covers engineering tasks specific to AR applications such as MT for reuse by AR task instance reformulation (e.g., reformulating a propositional deductive entailment task instance into an equivalent Boolean variable constraint solving task instance)?
5. How to extend this ORC2AS process with a specific sub-process to model and test behaviors declaratively, as OO rule bases?
6. How to extend it with specific sub-processes to assemble (a) PIM of agents and AR applications from RC PIM following architectural patterns, and (b) MAS PIM from agent PIM following societal interaction patterns?
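The task reformulation mentioned in question 4 rests on the classical equivalence: a propositional knowledge base KB entails a clause c if and only if KB together with the negation of c is unsatisfiable as a Boolean constraint problem. The Python sketch below is an illustration of that reduction only, not ORC2AS code; the brute-force satisfiability check stands in for a real Boolean constraint solver.

```python
from itertools import product

# A clause is a list of literals in DIMACS style: a positive int i means
# variable i, a negative int -i means its negation.
def satisfiable(clauses, n_vars):
    """Brute-force check: does any truth assignment satisfy every clause?"""
    for bits in product([False, True], repeat=n_vars):
        def holds(lit):
            value = bits[abs(lit) - 1]
            return value if lit > 0 else not value
        if all(any(holds(lit) for lit in clause) for clause in clauses):
            return True
    return False

def entails(kb, goal_clause, n_vars):
    """KB |= c  iff  KB plus the negation of c is unsatisfiable.
    The negation of a clause is one unit clause per negated literal."""
    negated = [[-lit] for lit in goal_clause]
    return not satisfiable(kb + negated, n_vars)

# KB: (p -> q) and p, i.e. clauses {-p, q} and {p}, with p = 1, q = 2.
kb = [[-1, 2], [1]]
print(entails(kb, [2], 2))   # True: q follows by modus ponens
print(entails(kb, [-1], 2))  # False: not-p does not follow
```

The same deduction-as-constraint-solving pattern is what the ATL MT rule bases of the ORC2AS roadmap would automate at the model level.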
Extensions 1-5 above will be validated during the development of the ORC2AS built-in components described in Section 6. Extension 6 will be validated during the development of a MAS. All six extensions will be specified in SPEM2.0 using the Eclipse Process Framework (EPF) open-source Eclipse plug-in for software process authoring and automated web publication.
Along the language design dimension, the ORC2AS project starts from:
a. OMG's OO modeling and meta-modeling standards (UML2.1, OCL2.0 and MOF2.0) at the knowledge level;
b. The OO imperative programming language Java at the implementation and deployment level;
c. The KR and CP language CHR∨ (Constraint Handling Rules with disjunctive bodies) [7] -- based on first-order relational multi-head guarded rewrite and production rules -- as the formalization level mediating between the knowledge and implementation levels;
d. The KR and Logic Programming (LP) language Flora-2 [3] -- based on (syntactically) high-order OO deductive Horn rules -- also at the formalization level;
e. The INRIA ATLAS group's ATL (Atlas Transformation Language) [9] MT language -- based on Ecore (a subset of MOF2.0), OCL, procedures and rewrite rules -- to develop CASE tools and compilers that support automated translation either between these three levels or inside each one of them.
ORC2AS aims at combining in synergy the complementary strengths of these languages by answering the following open questions:
1. What minimal subset of the UML2.1 meta-model is needed to visually and textually (with OCL2.0) represent all aspects of IE, OOKB, agents and MAS following the KobrA-2 and ORC2AS processes? The answer to this question -- intimately linked to the above mentioned software process dimension questions -- will be provided as two UML2.1 profiles: the first for KobrA-2 components and the second (merging the first) for ORC2AS agents and MAS;
2. Which Java component model is closer to that of UML2.1 to facilitate PIM to Java PSM MT? JavaBeans? EJB? OSGi? Eclipse Plug-ins?
3. How to extend CHR∨ into C2HR∨ (Component-based CHR∨) by adapting to the CHR∨ context the concepts of component, interface, port, client-server contract, communication protocol and component embedding from UML2.1, in order to reflect component assembly from the knowledge level down to the formalization level?
4. How to extend CHR∨ with Flora-2-inspired (syntactically) high-order object-orientation to obtain CHORD (Constraint Handling Object-oriented Rules with Disjunctive bodies) and avoid the paradigm mismatch of a purely relational CHR∨-based formalization level with the OO knowledge level above it and the OO implementation level below it?
5. How to integrate these orthogonal extensions of CHRÚ into C2HORD (Component-based CHORD)?
6. How will C2HORD compare with ATL as an MT language? In particular, how practically helpful are C2HORD features such as rule guards, meta-rules and rule base component assembly, which are not present in ATL?
Along the formal foundations dimension, the ORC2AS project starts from:
a. The classical first-order logic declarative semantics of monotonic CHR∨ programs [13];
b. The abstract operational semantics of monotonic and non-monotonic CHR∨ programs as a transition system between the constraint store states before and after each rule application [13];
c. The concrete CHR∨ operational semantics of its currently available Prolog implementations;
d. The model-theoretic semantics of Frame Logic (FL, the OO part of Flora-2 to reuse in CHORD) as OO well-founded Herbrand models [16];
e. The model-theoretic semantics of Transaction Logic (TL, the rule-based KB update transaction and procedural part of Flora-2) as the graph of all possible KB update transaction execution paths over KB Herbrand models [11].
ORC2AS aims at providing a unified model-theoretic semantics for the structural, behavioral, declarative and operational aspects of its formalization layer by answering the following open questions:
1. How to map any CHR∨ program, monotonic or non-monotonic, into a TL formula that captures both its declarative and operational semantics?
2. Can the graph of all possible KB update transaction execution paths over OO well-founded Herbrand models provide a correct unified model-theoretic semantics for formulas in Transaction Frame Logic (TFL, the integration of FL and TL implemented in the Flora-2 engine)?
3. How to extend the CHR∨ to TL mapping into a CHORD to TFL mapping to provide a formal semantics for CHORD?
4. Can the semantics of component assembly in C2HORD also be accounted for by a mapping to TFL?
5. What kind of C2HORD program properties could be tested by Flora-2 queries using such TFL semantics?
6. What is the theoretically minimal constraint set whose processing a C2HR∨ engine must delegate to a host platform? Is it empty, meaning that C2HR∨ could be self-contained once bootstrapped on a host?
Along the CASE tool dimension, the ORC2AS project starts from:
a. The Eclipse development platform, with its plug-ins EMF (Eclipse Modeling Framework) to model languages in Ecore and applications in UML2.1 and OCL2.0, and EPF to model software processes in SPEM2.0;
b. The ATL-DT (Atlas Transformation Language Development Tool) [9] Eclipse plug-in to model and execute MT;
c. MODELOG (Model-Oriented Development with Executable Logic Object Generation) [17], an ATL rule base to automatically translate a UML2.1 model consisting of class diagrams and OCL2.0 constraints into an executable Flora-2 high-order OO deductive Horn logic rule base.
ORC2AS aims at extending this MDE CASE tool suite by investigating the following open questions:
1. How to build an ATL MT rule base that automatically translates a UML2.1 model consisting of class diagrams and OCL2.0 constraints into an executable CHORD high-order OO multi-head rewrite and production rule base in a way that maximizes reuse of MODELOG rules?
2. How to extend this UML/OCL to CHORD translation ATL rule base to cover translation of UML components into corresponding C2HORD components?
3. Can the fact that Ecore is subsumed by UML class diagrams and OCL is subsumed by ATL be exploited to reuse such UML/OCL to C2HORD translation as the core of an ATL to C2HORD translation?
4. Which Flora-2 queries could leverage MODELOG to automatically verify that a UML/OCL model is logically consistent and/or compliant to the UML profiles for the KobrA-2 and ORC2AS processes?
Along the built-in components dimension, the ORC2AS project started from scratch with no reusable software artifact. However, its conceptual starting points were the following:
a. The observation that conceptual reuse pervades published AR techniques;
b. Frühwirth et al.'s reformulation of deduction, constraint solving and optimization as special cases of rule-based CP in CHR∨ [6];
c. Abdennadher's reformulation of LP, tabled LP, Constraint LP (CLP), default reasoning and abduction as special cases of rule-based CP in CHR∨ [7];
d. Thielscher's Fluent Calculus axiomatization, as rule-based CP in CHR∨, of belief update, planning and plausibilistic belief revision in MAS [14];
e. Costa et al.'s reformulation of first-order Bayesian reasoning (i.e., probabilistic deduction, abduction and belief revision) as CLP [12];
f. Yang's reformulation of monotonic and non-monotonic inheritance as tabled LP [16];
g. Wolf's reuse of Justification Truth-Maintenance (JTM, i.e., binary logic belief revision) for adaptive CP in CHR [15];
h. The fact that rewrite rule bases, production rule bases and CLP rule bases can all be easily translated into a semantically equivalent CHR∨ rule base, making CHR∨ the most versatile, subsuming rule language;
i. The fact that a CHR∨ engine supports goal-driven, data-driven or hybrid goal- and data-driven reasoning by respectively using rewrite rules, production rules or both;
j. The fact that a CHR∨ engine supports reasoning under the open-world assumption (the default), the closed-world assumption (via disjunctive production rules for all rule-defined constraints) or a partially closed-world assumption (via disjunctive production rules for selected rule-defined constraints).
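Points (b), (h) and (i) all rest on one execution model: a constraint store repeatedly transformed by rewrite (simplification) rules, which replace matched constraints, and production (propagation) rules, which add consequences, until a fixpoint is reached. The Python sketch below is a drastically simplified, ground-term illustration of that model, not a CHR∨ implementation: real CHR has logical variables, guards and multi-head unification.

```python
# The store holds ground leq(x, y) constraints encoded as (x, y) pairs.
# Each rule application below is one transition of the abstract
# operational semantics: store-before -> store-after.

def transitivity_step(store):
    """Propagation ('production') rule: leq(x,y), leq(y,z) ==> leq(x,z).
    Returns a new constraint to add, or None if the rule cannot fire."""
    for (a, b) in store:
        for (c, d) in store:
            if b == c and (a, d) not in store:
                return (a, d)
    return None

def reflexivity_step(store):
    """Simplification ('rewrite') rule: leq(x,x) <=> true.
    Returns a constraint to remove, or None if the rule cannot fire."""
    for (a, b) in store:
        if a == b:
            return (a, b)
    return None

def run(store):
    """Apply rules until no rule fires: the final, solved constraint store."""
    store = set(store)
    while True:
        removable = reflexivity_step(store)
        if removable is not None:
            store.discard(removable)
            continue
        addable = transitivity_step(store)
        if addable is not None:
            store.add(addable)
            continue
        return store

# Propagation adds leq(1,3); simplification removes the trivial leq(3,3).
print(sorted(run({(1, 2), (2, 3), (3, 3)})))  # [(1, 2), (1, 3), (2, 3)]
```

Swapping which rules fire first changes the derivation but, for a confluent rule base like this one, not the final store; that confluence is what licenses viewing the semantics as a transition system over stores.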
These results suggest the following roadmap for the development of the ORC2AS built-in components:
1. Start by developing CHROME (Constraint Handling Rule Online Model-driven Engine), a model-driven, component-based, scalable, adaptive, Java-hosted CHR∨ engine to lie at the bottom of the framework as the most widely reused AR component;
2. Extend CHROME with CHR∨ component assembly, yielding C2HROME;
3. Develop the C2HORD engine C2HORDATE (C2HORD Adaptive Transformation-based Engine) by assembling C2HROME with an ATL MT rule base that translates OO C2HORD rules into relational CHR∨ rules; for this translation, reuse ideas from the Flora-2 compiler, which translates OO rules into relational tabled LP rules to implement monotonic and non-monotonic, structural and behavioral inheritance;
4. Develop a C2HORD rule base component that axiomatizes the Fluent Calculus in OO flavor;
5. Develop an ontology of AR tasks and methods -- including various forms of constraint solving, optimization, deduction, abduction, default reasoning, inheritance, belief revision, belief update, planning, and their integration -- as a generalization hierarchy of KobrA-2 component specifications;
6. Realize these specifications by developing a set of ATL MT rule base pairs, each one reformulating instances from one component in this ontology into an OO rule-based CP task; the first member of such a pair translates Java objects representing the AR task and method instance input into a C2HORD initial OO constraint store and a C2HORD OO rule base to solve it; the second member translates back the C2HORD final solved OO constraint store as Java objects representing the AR task instance output; an inference engine for any task of this ontology can then be assembled as a pipeline of three components: the input reformulation ATL MT rule base for that task, C2HORDATE, and the output reformulation ATL MT rule base for that task.
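The three-component pipeline of step 6 -- input reformulation, solving engine, output reformulation -- can be sketched generically. Everything below is a hypothetical illustration, not ORC2AS code: the "solver" merely saturates a store of facts, standing in for C2HORDATE, and the encode/decode functions stand in for the two ATL MT rule bases.

```python
# Hypothetical sketch of the three-stage engine assembly of step 6:
# encode the task instance into a constraint store, run the solver
# component, then decode the solved store into application objects.

def make_engine(encode, solve, decode):
    """Assemble an inference engine as a pipeline of three components."""
    def engine(task):
        return decode(solve(encode(task)))
    return engine

# Toy instantiation: decide whether 'goal' is derivable from 'start' by
# repeatedly applying a +2 'inference rule'; solving is store saturation.
def encode(task):
    start, goal = task
    return {start}, goal

def solve(store):
    facts, goal = store
    while max(facts) < goal:        # saturate the store up to the goal
        facts.add(max(facts) + 2)
    return facts, goal

def decode(store):
    facts, goal = store
    return goal in facts

reachable = make_engine(encode, solve, decode)
print(reachable((3, 7)))   # True: 3 -> 5 -> 7
print(reachable((3, 6)))   # False: only odd numbers are derivable from 3
```

The point of the architecture is that `solve` is the single, shared, most widely reused component, while each AR task contributes only its own encode/decode pair.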
All these development tasks will be carried out following the KobrA-2 process discussed in section 2.
Each one of them will raise fundamental issues and bring detailed insights concerning the exact relationships between all these AR tasks.
Along the benchmark application dimension, the ORC2AS project will initially evaluate the CHROME Java-hosted adaptive CHR∨ engine by:
1. Comparing its performance with Prolog-hosted non-adaptive CHR∨ engines on CHR∨ programs that respectively implement finite domain constraint solving (map coloring), default reasoning and abduction (from Abdennadher [7]), belief revision, belief update and planning (from Thielscher [14]);
2. Comparing its performance with the DJCHR Java-hosted adaptive CHR (without disjunctions) engine on CHR programs solving finite and real domain constraints (from Wolf's benchmarks [15]);
3. Comparing its performance with the JCHR Java-hosted non-adaptive CHR (without disjunctions) engine on small CHR programs implementing classic computer science algorithms (from the JCHR site's benchmarks);
4. Using it to automatically allocate and adaptively reallocate workers to development tasks in a software house that concurrently carries out many upgrades of various priorities for several products.
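Map coloring, the first benchmark above, is the textbook finite-domain constraint problem: assign each region a color from a finite domain so that no two adjacent regions match. As a reference point independent of CHR∨, a minimal backtracking solver fits in a few lines of Python (an illustrative sketch, not part of the CHROME benchmarks):

```python
def color_map(neighbors, colors):
    """Backtracking finite-domain solver for map coloring: assign each
    region a color such that no two adjacent regions share one."""
    regions = sorted(neighbors)
    assignment = {}

    def consistent(region, color):
        return all(assignment.get(n) != color for n in neighbors[region])

    def backtrack(i):
        if i == len(regions):
            return True
        region = regions[i]
        for color in colors:
            if consistent(region, color):
                assignment[region] = color
                if backtrack(i + 1):
                    return True
                del assignment[region]          # undo and try the next color
        return False

    return assignment if backtrack(0) else None

# A 4-region map with adjacencies A-B, A-C, B-C, C-D; 3 colors suffice.
adj = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B", "D"}, "D": {"C"}}
solution = color_map(adj, ["red", "green", "blue"])
print(solution)  # {'A': 'red', 'B': 'green', 'C': 'blue', 'D': 'red'}
```

A CHR∨ encoding replaces the explicit backtracking with disjunctive rule bodies, which is exactly what the comparison in item 1 exercises.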
As a whole, the ORC2AS framework will make major contributions to the fields of CP, rule-based systems, multi-paradigm KR and programming languages, AR and software engineering.
To CP, C2HORDATE will contribute the first Eclipse-embedded, component-based, OO rule-based, adaptive, general-purpose Java solver. It will thus be far more versatile, extensible, user-friendly and easy to integrate in practical applications than any currently available constraint solver.
C2HORDATE will also contribute to rule-based systems integration by subsuming and seamlessly integrating the KR and reasoning services provided by most currently available rewrite systems, production systems and LP systems, whether purely relational or hybrid object-relational.
To multi-paradigm KR and programming languages, C2HORD will contribute the first integration of objects, constraints, rules, components and agents with a formal, logic-based, unified declarative and operational semantics (in TFL) and an Eclipse-embedded execution platform (C2HORDATE).
To AR, ORC2AS will contribute the first platform able to integrate deduction, abduction, default reasoning, inheritance, belief revision, belief update, planning, constraint solving and optimization using a high-order OO component- and rule-based KR language (C2HORD) or any of its restrictions; it will incorporate components to perform these tasks under a variety of epistemological commitments: binary or ternary logic (under open-world, closed-world or partially closed-world assumptions), plausibilistic or probabilistic. It will also contribute an ontology of AR tasks and methods clarifying in detail their mutual relationships.
To software engineering, ORC2AS will contribute:
1. KobrA-2, the first general-purpose software process to integrate MT-based MDE, formal methods, agile early testing and GUI modeling with component-based, aspect-oriented and OO development;
2. Its agent-oriented extension ORC2AS, the first process to contemplate all levels of MAS engineering from MAS society down to internal agent architecture patterns, KB acquisition and IE design;
3. An Eclipse-integrated CASE tool suite that provides a high-level of automation to apply these processes.
At each level of recursive component embedding, the envisioned KobrA-2 process can be outlined as follows:
1. Visually model the software requirements and structural design as UML component and class assemblies;
2. Make these requirements and structural design formally verifiable by completing them textually with C2HORD rules (either manually written or automatically generated from OCL constraints);
3. Model at a fully refined level the behaviors realized by each component either visually as UML activities or state machines, textually as C2HORD rules (either manually written or automatically generated from OCL expressions) or as the easier to read combination of the two;
4. Automatically generate a C2HORD OO rule base assembly from the integrated requirement, structural and behavioral PIM resulting from the previous steps;
5. Using C2HORDATE, apply to this assembly C2HORD meta-rule bases that encapsulate checks for conformance to the KobrA-2 and/or ORC2AS prescribed artifacts as well as overall logical consistency;
6. If any non-conformance or inconsistency is detected, correct it at the PIM level, regenerate a new C2HORD OO rule base assembly from the corrected PIM and check it again; repeat these steps until the PIM is shown to be conformant and consistent;
7. Use the C2HORD to Java compiler component of C2HORDATE to generate the software Java source code;
8. Use a Java component framework to compile this source code and deploy the result as an executable component assembly.
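Steps 4 to 6 above form a generate-check-correct loop over the PIM. The Python sketch below shows only the control flow of that loop; every name is a hypothetical placeholder, and the toy instantiation stands in for real C2HORD conformance and consistency checks.

```python
# Schematic sketch of the generate-check-correct loop of steps 4-6.
# All function names are hypothetical placeholders, not ORC2AS APIs.
def develop(pim, generate, check, correct, max_iterations=10):
    """Regenerate the rule base from the PIM until all checks pass."""
    for _ in range(max_iterations):
        rule_base = generate(pim)       # step 4: PIM -> rule base assembly
        problems = check(rule_base)     # step 5: conformance/consistency
        if not problems:
            return rule_base            # step 7 onward can proceed
        pim = correct(pim, problems)    # step 6: fix at the PIM level
    raise RuntimeError("PIM still inconsistent after the iteration limit")

# Toy instantiation: the 'PIM' is a set of integers, generation is the
# identity, the check flags negative numbers, and correction drops them.
generate = lambda pim: set(pim)
check = lambda rule_base: {x for x in rule_base if x < 0}
correct = lambda pim, problems: pim - problems
print(develop({1, -2, 3}, generate, check, correct))  # {1, 3}
```

The essential design point the loop captures is that corrections happen only at the PIM level, never by patching generated artifacts.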
The envisioned ORC2AS process follows the same outline. However, it also reuses, during assembly, the fully refined IE, KB, AR, agent architecture and agent society PIM available as built-ins in the framework. Beyond processes, ORC2AS will also contribute to formal OO software engineering by providing -- via its mapping pipeline from UML/OCL to C2HORD to TFL -- the first unified formal semantics for the large subset of the UML2.1 meta-model that covers components, classes, activities, state machines and constraints.
Subparts of ORC2AS are currently being funded as sub-projects with distinct names:
· The ROARS (Reuse-Oriented Automated Reasoning Software) project that tackles issues 2.1-2.4, 3.1-3.2, 3.4, 4.1-4.3, 5.4, 6.1, 6.5, 7.1-7.4 listed above is being funded by CAPES and DAAD for the period 2006-2007.
· The C4RBCP (Components for Rule-Based Constraint Programming) project that tackles issues 3.3, 3.5-3.6, 4.4-4.6 and 6.2-6.4 listed above is being funded by FACEPE and INRIA for the period 2007-2008.
These issues constitute the core of the following theses:
· The PhD theses of Jairson Vitorino at CIn-UFPE and Marc Meister at Universität Ulm, started in 2004;
· The Master's theses of Luiz Lacerda and Fabrício Teles, started in 2006, and those of Marcos Aurelio Silva, Cleyton Rodrigues, Pablo de Santana, Antônio Costa, Weslei Marino and Breno Machado at CIn-UFPE, started in 2007.
So far, this work has resulted in the following publications:
· Jairson Vitorino, Jacques Robin and Thom Frühwirth. Fast Prototyping of Intelligent Components: Towards a Model-driven Compiler for Rule-Based Constraint Programming. 8th International Conference on
· Marc Meister, Khalil Djelloul and Jacques Robin. A Unified Semantics for Constraint Handling Rules in Transaction Logic. 9th International Conference on Logic Programming and Non-Monotonic Reasoning (LPNMR'2007).
· Jacques Robin and Jairson Vitorino. ORCAS: Towards a CHR-Based Model-Driven Framework of Reusable Reasoning Components. 20th Workshop on Logic Programming (WLP'2006).
· Luis Menezes, Jairson Vitorino and Marcos Aurelio. A High Performance CHR∨ Execution Engine. 2nd Workshop on Constraint Handling Rules (CHR'2005).
[1] http://www.planetmde.org/
[2] http://www.omg.org/technology/documents/formal/ocl.htm
[3] http://flora.sourceforge.net/
[4] http://www.aosd.net/
[5] http://www.omg.org/
[6] http://www.cs.kuleuven.ac.be/~dtai/projects/CHR/
[7] Abdennadher, S. Rule-based Constraint Programming: Theory and Practice. Habilitation thesis, Institut für Informatik, Ludwig-Maximilians-Universität München, 2001.
[8] Atkinson, C., Bayer, J., Bunse, C., Kamsties, E., Laitenberger, O., Laqua, R., Muthig, D., Paech, B., Wust, J. and Zettel, J. Component-based Product Line Engineering with UML. Addison-Wesley, 2002.
[9] ATLAS Group. The ATL User Manual. http://www.eclipse.org/gmt/atl/doc/.
[10] Blanc, X. MDA en Action: Ingénierie logicielle guidée par les modèles. Eyrolles, 2005.
[11] Bonner, A. and Kifer, M. Transaction Logic Programming (or, a Logic of Procedural and Declarative Knowledge). Technical Report CSRI-323, Computer Systems Research Institute,
[12] Costa, V., Page, D., Qazi, M. and Cussens, J. Constraint Logic Programming for Probabilistic Knowledge. In Proceedings of the 19th Conference on Uncertainty in Artificial Intelligence (UAI'03),
[13] Frühwirth, T. and Abdennadher, S. Essentials of Constraint Programming. Springer, 2003.
[14] Thielscher, M. http://www.fluxagent.org/, 2002.
[15] Wolf, A., Gruenhagen, T. and Geske, U. On Incremental Adaptation of CHR Derivations. Journal of Applied Artificial Intelligence 14(4), Special Issue on Constraint Handling Rules, 2000.
[16] Yang, G. A Model Theory for Nonmonotonic Multiple Value and Code Inheritance in Object-Oriented Knowledge Bases. PhD thesis, Computer Science Department, Stony Brook University of
[17] Ramalho, F. MODELOG: Model-Oriented Development with Executable Logic Object Generation. PhD thesis, Centro de Informática, Universidade Federal de Pernambuco (CIn-UFPE),
© Jacques Robin 2007.