[SystemSafety] CFP for AAA (Argument for Agreement and Assurance)

From: Kenji Taguchi (田口研治) < >
Date: Mon, 27 Jul 2015 08:15:04 +0000


%% AAA2015: Second Call for Papers


%% 2nd International Workshop on

%% Argument for Agreement and Assurance


%% Nov 17, 2015 (Tentative)

%% Keio University, Kanagawa, Japan


%% http://cse.t.u-tokyo.ac.jp/kido/AAA2015/


[We apologize if you receive multiple copies.]


Submissions are invited for the 2nd International Workshop on Argument

for Agreement and Assurance (AAA 2015).

It will be held as a one-day workshop during November 16-18, 2015
at Keio University, Kanagawa, Japan, in association with the Japanese
Society for Artificial Intelligence international symposia on AI
(JSAI-isAI) 2015.

Argument has become an interdisciplinary research topic receiving
much attention from formal logic, informal logic, and artificial
intelligence. The field aims at processing, analyzing, and evaluating
various aspects of human arguments appearing in television, newspapers,
the WWW, etc., as well as artificial arguments constructed from
structured knowledge with a logical language and rules of inference.
Results of this research are widely applicable to various domains,
such as the safety, political, medical, and legal domains.

In particular, safety engineering has embraced Toulmin's argument
model, which grew out of his critique of formal logic. There is a
growing interest in the use of an evidence-based argument, often called
a safety case, assurance case, or dependability case. Nowadays, many
safety-related standards and guidelines mandate the submission of safety
cases to certification bodies. Such arguments are widely used in system
development, for example to help stakeholders reach agreement on
critical issues and to help system manufacturers improve accountability
to their customers. The International Workshop on Argument for
Agreement and Assurance aims to deepen mutual understanding among
researchers and practitioners in formal and informal logic, artificial
intelligence, and safety engineering who work on agreement and
assurance through argument.


Topics of interest include but are not limited to the following:

      - e.g., frameworks, proof-theories, semantics, and complexity.
      - eristic, and information-seeking dialogue systems.
      - e.g., frameworks, proof-theories, semantics, and complexity.
      - formal semantics, evaluation of their effectiveness.
      - for safety cases, assurance cases, dependability cases, etc.
      - agreement technologies, systems assurance, safety engineering,
        systems resilience, practical reasoning, belief revision,
        multi-agent systems, learning, and the semantic web.
      - safety case construction systems, argument-based stakeholders'
        agreement, argument-based accountability achievement,
        argument-based open systems dependability,
        argument-based verification and validation, etc.

Important Dates

Submission Instructions

We welcome and encourage the submission of high-quality, original
papers that are not simultaneously submitted for publication
elsewhere. Papers should be written in English, formatted according to
the Springer Verlag LNCS style (available from Springer Online),
submitted as a PDF, and must not exceed 14 pages including figures,
references, etc. If you use a Word file, please follow the formatting
instructions and then convert it to PDF. Here is the submission page.

We will also have a "Tools and Demo" session. Contributions to this
session may be submitted as papers of at most 4 pages. For accepted
papers, the authors are required to give a demonstration in the session.

All submissions will be rigorously peer reviewed under a double-blind
process. If a paper is accepted, at least one author of the paper must
register for the workshop and present it. Since LNAI post-proceedings
are proposal-based, post-proceedings of AAA 2015 may be published but
are not currently guaranteed, although our proposals have been accepted
in every recent year.

Invited Speakers

Phan Minh Dung, Asian Institute of Technology

Ewen Denney, SGT/NASA Ames Research Center

Programme Committee

    Katarzyna Budzynska, Polish Academy of Sciences & University of Dundee

    Martin Caminada, University of Aberdeen

    Federico Cerutti, University of Aberdeen

    Juergen Dix, Clausthal University of Technology

    Ewen Denney, SGT/NASA Ames Research Center

    Phan Minh Dung, Asian Institute of Technology

    C. Michael Holloway, NASA Langley Research Center

    Antonis Kakas, University of Cyprus

    Tim Kelly, University of York

    Hiroyuki Kido, The University of Tokyo

    Yoshiki Kinoshita, Kanagawa University

    John Knight, University of Virginia

    Yutaka Matsuno, Nihon University

    John Rushby, SRI International

    Chiaki Sakama, Wakayama University

    Ken Satoh, National Institute of Informatics and Sokendai

    Guillermo Simari, Universidad Nacional del Sur in Bahia Blanca

    Kenji Taguchi, National Institute of Advanced Industrial Science and Technology

    Kazuko Takahashi, Kwansei Gakuin University

    Toshinori Takai, Nara Institute of Science and Technology

    Makoto Takeyama, Kanagawa University

    Paolo Torroni, University of Bologna

    Charles Weinstock, Software Engineering Institute

    Stefan Woltran, TU Wien

    Shuichiro Yamamoto, Nagoya University

Organizing Committee

    Kazuko Takahashi, Kwansei Gakuin University

    Kenji Taguchi, National Institute of Advanced Industrial Science and Technology

    Tim Kelly, University of York

    Hiroyuki Kido, The University of Tokyo

Kenji Taguchi, Ph.D. (Computer Science)

Invited Senior Researcher

Co-chair of OMG SysA PTF

Software Analytics Research Group

Information Technology Research Institute

National Institute of Advanced Industrial Science and Technology (AIST)

Nakoji 3-11-46, Amagasaki, Hyogo 661-0974, Japan

Tel: +81-6-6494-8051 Fax: +81-6-6494-8073

URL: http://staff.aist.go.jp/kenji.taguchi/index.html

The System Safety Mailing List
systemsafety_at_xxxxxx
Received on Mon Jul 27 2015 - 10:15:22 CEST