Slide 1
Reasoning About Beliefs, Observability, and Information Exchange in Teamwork
Thomas R. Ioerger
Department of Computer Science, Texas A&M University
Slide 2
The Need for Reasoning about Beliefs of Others in MAS

The traditional interpretation of BDI covers the Beliefs, Desires, and Intentions of the self. What about the beliefs of others? These matter for agent interactions.

Decision-making depends on beliefs about others, e.g. in driving:
  • Does the other driver see me?
  • Does the other driver seem to be in a hurry?
  • Did the other driver see who arrived at the intersection first?
  • Does the other driver see my turn signal?
  • Does the other driver allow a gap to open for changing lanes?
Slide 3
The Need for Reasoning about Beliefs of Others in Teams
  • Proactive Information Exchange
  • automatically share info with others
  • makes teamwork more efficient
  • infer relevance from the pre-conditions of others' goals in the team plan
  • should try to avoid redundancy

Ideal conditions for agent A to send message I to agent B:
  Bel(A,I) ∧ Bel(A,¬Bel(B,I)) ∧ Bel(A,Goal(B,G)) ∧ Precond(I,G) ∧ ¬Bel(B,I) ∧ ¬Done(B,G)

team-plan catch-thief:
  (do B (turn-on light-switch))
  (do A (enter-room))
  (do A (jump-on thief))

Should B tell A the light is now on? (A small Python check of the send condition follows.)
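
A minimal sketch of this check in Python. The belief-model encoding and helper names are our illustrative assumptions, not the paper's API, and the ¬Done(B,G) test is omitted for brevity:

    def should_tell(sender_model, receiver, fact, goal, preconditions):
        """Approximate Bel(A,I) and Bel(A,not Bel(B,I)) and Precond(I,G)."""
        a_believes_fact = sender_model.get(fact) == "true"
        a_thinks_b_unaware = sender_model.get((receiver, fact)) != "true"
        fact_needed_for_goal = fact in preconditions.get(goal, [])
        return a_believes_fact and a_thinks_b_unaware and fact_needed_for_goal

    # catch-thief example: B turned the light on; A's next step needs it.
    model = {"lightOn(room1)": "true", ("A", "lightOn(room1)"): "unknown"}
    print(should_tell(model, "A", "lightOn(room1)", "enter-room",
                      {"enter-room": ["lightOn(room1)"]}))  # True: B should tell A
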
Slide 4
Observability
  • Obs(a,φ,ψ): agent a will observe φ under conditions ψ (i.e., the context)
  • example: ∀x Obs(A1,broken(x),holding(A1,x))
  • Similarity to VSK logic (Wooldridge):
  • V(φ) = accessible, S(φ) = perceived, K(φ) = known
  • Obs(a,φ,ψ) ≡ ψ → Sa(φ)
  • Assumption: agents believe what they see: Sa(φ) → Ka(φ)
  • Small differences:
  • we use Belief instead of Knowledge: Sa(φ) → Ba(φ)
  • B is a weak-S5 modal logic (KD45, i.e. without the T axiom B(φ) → φ)
  • agents only believe whether φ is true (or false):
  • Obs(a,φ,ψ) ≡ ψ → ((φ → Sa(φ)) ∧ (¬φ → Sa(¬φ)))
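
A toy sketch of applying such observability rules; the rule and world-state encodings below are our assumptions for illustration:

    from dataclasses import dataclass

    @dataclass
    class ObsRule:
        agent: str      # who observes
        fact: str       # proposition observed, e.g. "broken(vase)"
        context: str    # condition under which it is observed

    def observed_facts(agent, world, rules):
        """For each rule whose context holds, the agent sees *whether* the
        fact is true, matching the (phi -> Sa(phi)) and (not phi -> Sa(not phi)) reading."""
        seen = {}
        for r in rules:
            if r.agent == agent and world.get(r.context, False):
                seen[r.fact] = world.get(r.fact, False)
        return seen

    rules = [ObsRule("A1", "broken(vase)", "holding(A1,vase)")]
    world = {"holding(A1,vase)": True, "broken(vase)": False}
    print(observed_facts("A1", world, rules))   # {'broken(vase)': False}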

Slide 5
Belief Database

Tuples <A,F,V>, where A ∈ Agents, F ∈ Facts (propositions), V ∈ Valuations.
Valuations: {true, false, unknown, whether}, where unknown ≡ ¬true ∧ ¬false and whether ≡ true ∨ false.

  • <A1,in(gold,room1),true>
  • <A1,lightOn(room1),false>
  • <A1,in(A1,room1),true>
  • <A1,in(A1,room2),false>
  • <A1,in(A2,room1),false>
  • <A1,in(A2,room2),true>
  • <A2,in(gold,room1),unknown>
  • <A2,lightOn(room1),false>
  • <A2,in(A1,room1),true>
  • <A2,in(A1,room2),false>
  • <A2,in(A2,room1),whether>
  • <A2,in(A2,room2),whether>
  • ...

Update Algorithm: Di+1 = Update(Di, P, J), which takes the current belief database Di, the perceptions P, and the justification rules J, and produces the next database Di+1.
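
As a rough sketch, the database can be pictured as a mapping from (agent, fact) pairs to valuations; the update below applies only perceptions, whereas the full algorithm (next slides) also weighs justification rules:

    # Valuations: "true", "false", "unknown", "whether" (the agent knows the
    # value, but our model does not record which one).
    beliefs = {
        ("A1", "in(gold,room1)"): "true",
        ("A2", "in(gold,room1)"): "unknown",
        ("A2", "in(A2,room1)"): "whether",
    }

    def update(db, perceptions):
        """Return D_{i+1} from D_i and perceptions P (rules J omitted here)."""
        new_db = dict(db)
        for key, value in perceptions.items():
            new_db[key] = value   # a perception overrides the old valuation
        return new_db

    beliefs = update(beliefs, {("A2", "in(gold,room1)"): "true"})
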
Slide 6
Justifications for Belief Updates

  Justification type    Representation   Priority   Notes
  direct observation    (sense φ)        6          self only
  observability         (obs a φ ψ)      5          obs of others
  effects of actions    (effect x q)     4          if aware of x
  inference             (infer φ ψ)      3          ψ → φ
  memory                (persist φ)      2          φ true or false
  assumption            (default φ)      1
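
A compact way to encode these priorities (illustrative; the term syntax below is a stand-in, not the paper's exact encoding):

    JUSTIFICATION_PRIORITY = {
        "sense": 6,    # direct observation (self only)
        "obs": 5,      # observability of others
        "effect": 4,   # effects of actions (if aware of the action)
        "infer": 3,    # inference rules: psi -> phi
        "persist": 2,  # memory: the fact stays true or false
        "default": 1,  # assumptions
    }

    def strongest(justifications):
        """Pick the justification whose type has the highest priority."""
        return max(justifications, key=lambda j: JUSTIFICATION_PRIORITY[j[0]])

    # A remembered value loses to a fresh observation of another agent:
    print(strongest([("persist", "lightOn(room1)"),
                     ("obs", "A2", "lightOn(room1)")]))   # the "obs" entry wins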

Slide 7
Belief Update Algorithm
  • updating beliefs is not so simple...
  • Prioritized logic programs:
  • Horn clauses annotated with strengths
  • semantics based on models in which facts are supported by the strongest rule
  • implementation (assuming rules are not cyclic...):
  • create a DAG of propositions
  • topologically sort them as P1..Pn
  • determine truth values in that order; Pi depends at most on the truth values of P1..Pi-1 (a sketch follows below)

Example rules (strengths in parentheses):
  A ∧ B → C   (1)
  G → ¬C      (2)
  C ∧ ¬D → E  (1)
  A ∧ F → ¬E  (3)

[Figure: dependency DAG with A, B, F, G at the leaves, C and D above them, and E at the top]
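
A hedged sketch of this evaluation in Python; the rule encoding and helper names are ours, standing in for the actual prioritized logic programs:

    # The slide's rules: A and B -> C (1), G -> not C (2),
    #                    C and not D -> E (1), A and F -> not E (3).
    # A rule is (body, head, head_value, strength); body is a list of (prop, value).
    rules = [
        ([("A", True), ("B", True)], "C", True, 1),
        ([("G", True)], "C", False, 2),
        ([("C", True), ("D", False)], "E", True, 1),
        ([("A", True), ("F", True)], "E", False, 3),
    ]

    def evaluate(facts, rules, topo_order):
        """Assign each proposition, in topological order, the value given by
        the strongest applicable rule (leaves keep their given values)."""
        values = dict(facts)
        for p in topo_order:
            if p in values:
                continue
            applicable = [r for r in rules
                          if r[1] == p and all(values.get(q) == v for q, v in r[0])]
            if applicable:
                values[p] = max(applicable, key=lambda r: r[3])[2]
        return values

    facts = {"A": True, "B": True, "D": False, "F": True, "G": True}
    print(evaluate(facts, rules, ["A", "B", "D", "F", "G", "C", "E"]))
    # C is False (G -> not C at strength 2 beats A and B -> C at 1), so
    # C and not D -> E cannot fire, and A and F -> not E (3) makes E False.
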
Slide 8
PIEX Algorithm

PIEX: Proactive Information EXchange
Given belief database D, perceptions P, and justification rules J:

  D ← BeliefUpdateAlg(D, P, J)
  for each agent Ai ∈ Agents and G ∈ Goals(Ai):
    for each C ∈ PreConditions(G):
      if C is a positive literal, let v ← true
      if C is a negative literal, let v ← false
      if <Ai,C,not(v)> ∈ D or <Ai,C,unknown> ∈ D:
        Tell(Ai, C, v)
        Update(D, <Ai,C,v>)

(A Python rendering of this loop follows.)
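
A hedged Python rendering of the loop above; the concrete data structures (the belief database as a dict, plus goal and precondition tables) are illustrative stand-ins:

    def piex(D, agents, goals, preconditions, tell):
        """D maps (agent, fact) -> valuation; goals maps agent -> goal list;
        preconditions maps goal -> list of (fact, positive?) literals."""
        for ai in agents:
            for g in goals.get(ai, []):
                for fact, positive in preconditions.get(g, []):
                    v = "true" if positive else "false"
                    if D.get((ai, fact), "unknown") != v:  # believes not(v) or unknown
                        tell(ai, fact, v)                  # proactively inform Ai
                        D[(ai, fact)] = v                  # record Ai's new belief

    D = {}
    piex(D, ["A"], {"A": ["enter-room"]},
         {"enter-room": [("lightOn(room1)", True)]}, print)  # prints: A lightOn(room1) true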

Slide 9
Experiment: Wumpus Roundup!
Slide 10
Issues
  • The current formalism does not allow for nested beliefs:
  • Bel(A1,Bel(A2,lightOn(room5)))
  • Bel(A1,Bel(A2,Bel(A1,lightOn(room5))))
  • see Isozaki and Katsuno (1996)
  • We are working on a representation of modal logic in Prolog (sketched below):
  • allows nested beliefs and rules
  • backward-chaining rather than forward (unlike, e.g., the UpdateAlg above)
  • of course, not complete
  • Better reasoning about knowledge of actions:
  • assert pre-conditions before effects? uncertainty about the do-er/time?
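
Purely for illustration (Python tuples standing in for Prolog terms; this is not the authors' system), nested beliefs and a toy backward-chainer might look like:

    def bel(agent, formula):
        return ("bel", agent, formula)   # Bel(agent, formula) as a nested tuple

    facts = {bel("A2", "lightOn(room5)")}
    # One toy rule: conclude Bel(A1, Bel(A2, phi)) from Bel(A2, phi).
    rules = [(bel("A1", bel("A2", "lightOn(room5)")),
              bel("A2", "lightOn(room5)"))]

    def prove(goal):
        """Backward-chaining: a goal holds if it is a known fact, or the
        premise of some rule concluding it can itself be proven."""
        if goal in facts:
            return True
        return any(prove(premise) for conclusion, premise in rules
                   if conclusion == goal)

    print(prove(bel("A1", bel("A2", "lightOn(room5)"))))   # True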

Slide 11
Conclusions
  1. Modeling the beliefs of others is important for multi-agent interactions.
  2. Observability is a key to modeling others' beliefs.
  3. It must be integrated properly with other justifications, such as inference and persistence.
  4. Different strengths can be managed using prioritized inference (Prioritized Logic Programs).
  5. Proactive information exchange can improve the performance of teams.
  6. Message traffic can be intelligently reduced by reasoning about beliefs.