
Saturday, 18 February 2023

ARTIFICIAL INTELLIGENCE WITH PYTHON - SMALL NOTES

 

Artificial Intelligence with Python:-

Agent- Entity that perceives its environment and acts upon that environment.

Initial State- The state in which the agent begins.

Actions- ACTIONS(s) returns the set of actions that can be executed in state s.

Transition Model- A description of what state results from performing any applicable action in any state.

RESULT(s, a) returns the state resulting from performing action a in state s.


State Space- The set of all states reachable from the initial state by any sequence of actions.

Goal test- A way to determine whether a given state is a goal state.

Path Cost- Numerical cost associated with a given path.

Search Problem- A search problem consists of:

  • initial state
  • actions
  • transition model
  • goal test
  • path cost function

Solution -

A sequence of actions that leads from the initial state to a goal state.

Optimal-Solution- A solution that has the lowest path cost among all solutions.

Node- A data structure that keeps track of

  • a state
  • a parent (the node that generated this node)
  • an action (the action applied to the parent to get this node)
  • a path cost (from the initial state to this node)
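This Node structure can be sketched as a small Python class. The `path()` helper, which walks parent pointers back to the initial state to recover the action sequence, is an addition for illustration:

```python
# A minimal sketch of the Node structure described above.
class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state          # the state this node represents
        self.parent = parent        # the node that generated this node
        self.action = action        # action applied to the parent to get this node
        self.path_cost = path_cost  # cost from the initial state to this node

    def path(self):
        # Follow parent pointers back to the root to recover the solution.
        node, actions = self, []
        while node.parent is not None:
            actions.append(node.action)
            node = node.parent
        return list(reversed(actions))
```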

Approach-

  • Start with a frontier that contains the initial state.

  • Repeat:

         * If the frontier is empty, then there is no solution.

         * Remove a node from the frontier.

         * If the node contains a goal state, return the solution.

         * Expand the node, and add the resulting nodes to the frontier.

Example: find a path from node A to node E.

Revised Approach:-

  • Start with a frontier that contains the initial state.

  • Start with an empty explored set.

  • Repeat:

         * If the frontier is empty, then there is no solution.

         * Remove a node from the frontier.

         * If the node contains a goal state, return the solution.

         * Add the node to the explored set.

         * Expand the node, and add the resulting nodes to the frontier if they aren't already in the frontier or the explored set.

  • One of the simplest data structures for adding and removing elements is the stack- a last-in-first-out data type.

  • So when we treat the frontier like a stack, the last node added to the frontier is the first one removed and explored.
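The revised approach above, with the frontier treated as a stack (a plain Python list), can be sketched as follows. The `neighbors(state)` helper is an assumption standing in for ACTIONS and RESULT, returning the states reachable in one action:

```python
def depth_first_search(initial, goal, neighbors):
    """Revised approach: frontier as a stack plus an explored set.
    neighbors(state) returns the states reachable in one action."""
    frontier = [initial]          # stack: last in, first out
    explored = set()
    parents = {initial: None}     # remembers how each state was reached
    while frontier:               # if the frontier is empty: no solution
        state = frontier.pop()    # remove the most recently added node
        if state == goal:         # goal test
            path = []
            while state is not None:
                path.append(state)
                state = parents[state]
            return list(reversed(path))
        explored.add(state)
        for nxt in neighbors(state):
            # only add nodes not already in the frontier or explored set
            if nxt not in explored and nxt not in frontier:
                parents[nxt] = state
                frontier.append(nxt)
    return None                   # frontier emptied: no solution
```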

Depth-First Search-

Search algorithm that always expands the deepest node in the frontier.

Breadth-First Search-

Search algorithm that always expands the shallowest node in the frontier.

This means that instead of using a stack, where depth-first search (DFS) explores the most recently added node next, breadth-first search (BFS) uses a queue: a first-in-first-out data type.
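A minimal sketch of the same loop with a queue frontier, using `collections.deque`. It records the order in which nodes are expanded, so you can see shallow nodes come first; swapping `popleft()` for `pop()` would give depth-first order instead. The `neighbors(state)` helper is an assumed stand-in for ACTIONS and RESULT:

```python
from collections import deque

def bfs_order(initial, neighbors):
    """Expand nodes with a queue frontier (first in, first out) and
    return the order in which states are expanded."""
    frontier = deque([initial])
    explored = set()
    order = []
    while frontier:
        state = frontier.popleft()   # FIFO; frontier.pop() here would give DFS
        if state in explored:
            continue
        explored.add(state)
        order.append(state)
        for nxt in neighbors(state):
            if nxt not in explored:
                frontier.append(nxt)
    return order
```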

Uninformed Search -

Search strategy that uses no problem- specific knowledge.

Informed Search-

Search strategy that uses problem-specific knowledge to find solutions more efficiently.

Greedy best-first search-

Search algorithm that expands the node that is closest to the goal, as estimated by a heuristic function h(n).
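A sketch of greedy best-first search, ordering the frontier by h(n) alone with a priority queue (`heapq`). The `neighbors(state)` helper and the heuristic `h` are assumptions for illustration; note that path cost is ignored entirely:

```python
import heapq

def greedy_best_first(start, goal, neighbors, h):
    """Always expand the frontier node with the smallest heuristic
    estimate h(n); the cost of the path so far is ignored."""
    frontier = [(h(start), start, [start])]   # priority queue keyed on h(n)
    explored = set()
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in explored:
            continue
        explored.add(state)
        for nxt in neighbors(state):
            if nxt not in explored:
                heapq.heappush(frontier, (h(nxt), nxt, path + [nxt]))
    return None
```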


A* Search -

Search algorithm that expands the node with the lowest value of g(n) + h(n).

g(n) = cost to reach the node

h(n) = estimated cost to the goal

Optimal if:

— h(n) is admissible (never overestimates the true cost), and

— h(n) is consistent (for every node n and successor n’ with step cost c, h(n) ≤ h(n’) + c).
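A sketch of A* with a priority queue (`heapq`), assuming a `neighbors(state)` helper that yields `(next_state, step_cost)` pairs and a heuristic function `h`; both are illustrative assumptions, not part of the notes above:

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Expand the frontier node with the lowest g(n) + h(n)."""
    # Each entry is (g + h, g, state, path taken to reach state).
    frontier = [(h(start), 0, start, [start])]
    best_g = {start: 0}                 # cheapest known cost to each state
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        for nxt, cost in neighbors(state):
            new_g = g + cost
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(frontier, (new_g + h(nxt), new_g, nxt, path + [nxt]))
    return None, float("inf")           # no path to the goal
```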

Minimax -

  • MAX (X) aims to maximize the score.
  • MIN (O) aims to minimize the score.

Game-

  • S₀: the initial state
  • PLAYER(s): returns which player is to move in state s
  • ACTIONS(s): returns legal moves in state s
  • RESULT(s, a): returns the state after action a is taken in state s
  • TERMINAL(s): checks whether state s is a terminal state
  • UTILITY(s): final numerical value for terminal state s

Minimax-

  • Given a state s:
  • MAX picks action a in ACTIONS(s) that produces the highest value of MIN-VALUE(RESULT(s, a)).
  • MIN picks action a in ACTIONS(s) that produces the smallest value of MAX-VALUE(RESULT(s, a)).
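The minimax rule above can be sketched directly in terms of the TERMINAL, UTILITY, ACTIONS, and RESULT functions from the game definition, passed in here as plain Python callables. The tiny game tree in the test is invented purely for illustration:

```python
def minimax(state, maximizing, terminal, utility, actions, result):
    """Return the minimax value of a state.
    maximizing is True when it is MAX's turn, False for MIN."""
    if terminal(state):
        return utility(state)
    values = [
        minimax(result(state, a), not maximizing, terminal, utility, actions, result)
        for a in actions(state)
    ]
    # MAX picks the highest value among successors, MIN the smallest.
    return max(values) if maximizing else min(values)
```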

DEPTH-LIMITED MINIMAX-

Minimax that looks only a fixed number of moves ahead, then estimates the value of the resulting state with an evaluation function.

ALPHA-BETA PRUNING-

Optimization of minimax that skips branches that cannot possibly change the final decision.

EVALUATION FUNCTION-

function that estimates the expected utility of the game from a given state.
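The two ideas combine naturally: a sketch of depth-limited minimax with alpha-beta pruning, where `evaluate` is the evaluation function applied at the depth cutoff. The `terminal` and `moves` callables are assumptions standing in for TERMINAL and ACTIONS/RESULT:

```python
def alphabeta(state, depth, alpha, beta, maximizing, terminal, evaluate, moves):
    """Depth-limited minimax with alpha-beta pruning.
    alpha = best value MAX can guarantee so far; beta = best for MIN."""
    if depth == 0 or terminal(state):
        return evaluate(state)          # cutoff: estimate utility instead of searching on
    if maximizing:
        value = float("-inf")
        for nxt in moves(state):
            value = max(value, alphabeta(nxt, depth - 1, alpha, beta, False,
                                         terminal, evaluate, moves))
            alpha = max(alpha, value)
            if alpha >= beta:           # MIN would never allow this branch: prune
                break
        return value
    value = float("inf")
    for nxt in moves(state):
        value = min(value, alphabeta(nxt, depth - 1, alpha, beta, True,
                                     terminal, evaluate, moves))
        beta = min(beta, value)
        if alpha >= beta:               # MAX would never allow this branch: prune
            break
    return value
```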
