Reach AI
Overview
So far you’ve been deep in the trenches of implementing a functional game that interacts with a user. Now, it’s time to pivot and think as the user of this game. Specifically, what kind of decisions should you make as a player of this game to get the best performance from the elevators?
This section may seem fairly open-ended, and you may feel slightly confused about what your AI is responsible for doing. This is by design. Oftentimes, real-world projects are vague about a lot of implementation details, so programmers need to think about the problem and decide what's best for the project. Think! Discuss with your groupmates! Think some more! Try things! Fail! Try again! Have fun!
First Steps
Play through the game, and for each move you make, ask yourself why you decided to do the thing you did. What kind of rules come to mind?
Understanding the Metrics
What does it mean to be the "best" elevator? To answer this question, we define a SatisfactionIndex. Make sure to read through and understand what each of these values represents. If you had to play the game trying to get the best possible values for these, what would you do?
The AI Move Framework
Your AI will be implemented in AI.cpp.

Among the functionality we've built for you are the files AI.cpp and AI.h. The functions declared there receive the same information that is provided to the Player via the terminal, and they should create valid Moves. For the Reach, you will be responsible for implementing the functions in AI.cpp in such a way that gets good performance. More concretely, we will evaluate two things:
- Validity: Your AI should only produce valid moves (from getAIMoveString) and valid pickup lists (from getAIPickupList).
- Performance: Your AI should try to optimize the metrics in the SatisfactionIndex on certain game files. It is up to you which metrics you'd like to optimize!
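To make this contract concrete, here is a minimal sketch of what these two functions might look like. The signatures shown are assumptions based on the names above; check AI.h and the starter files for the exact declarations and for what counts as a valid move or pickup string.

```cpp
// Sketch only: assumes AI.h declares these signatures and pulls in
// the Move, BuildingState, and Floor types from the starter files.
#include "AI.h"
#include <string>
using namespace std;

string getAIMoveString(const BuildingState& buildingState) {
    // Whatever is returned here must be a move you could have typed
    // at the terminal yourself. As a trivially valid baseline, pass
    // every turn (assuming an empty string means "pass"); a real AI
    // would inspect buildingState and dispatch an elevator instead.
    return "";
}

string getAIPickupList(const Move& move, const BuildingState& buildingState,
                       const Floor& floorToPickup) {
    // Likewise, return exactly what you would have typed at the
    // Choices: prompt. Picking up person 0 (assuming the indices
    // shown at the prompt) is only a placeholder, not a strategy;
    // make sure your choice respects capacity and direction rules.
    return "0";
}
```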
Implementation Considerations
You are encouraged to devise helper functions to aid you in writing the functions in AI.cpp; you can add these directly to AI.cpp when you submit. You are also permitted to include any libraries present in the C++11 standard library, provided that you don't have to modify your build process to get them to work. Do not edit AI.h. We should be able to run your code immediately when we make a project in Xcode or Visual Studio.
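For instance, a helper like the sketch below could score floors by total waiting anger, in support of a "serve the angriest floor" strategy like the one described in the Showcase section. The members used here (floors, numPeople, people, angerLevel, and the NUM_FLOORS constant) are assumptions for illustration; adapt them to whatever the starter files actually expose.

```cpp
// Hypothetical helper for AI.cpp. Assumes BuildingState exposes an
// array of floors, each with a numPeople count and a people array
// whose entries carry an angerLevel, and that the starter code
// defines NUM_FLOORS -- verify all of this against the real files.
#include "AI.h"

// Returns the index of the floor whose waiting people are angriest
// in total, so the AI can send an idle elevator there.
int findAngriestFloor(const BuildingState& buildingState) {
    int bestFloor = 0;
    int bestAnger = -1;
    for (int f = 0; f < NUM_FLOORS; ++f) {
        int totalAnger = 0;
        for (int p = 0; p < buildingState.floors[f].numPeople; ++p) {
            totalAnger += buildingState.floors[f].people[p].angerLevel;
        }
        if (totalAnger > bestAnger) {
            bestAnger = totalAnger;
            bestFloor = f;
        }
    }
    return bestFloor;
}
```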
Testing your AI
As always, it is imperative to test your code well. The Elevators Reach is different from other 183 projects in that your grade is determined by the performance of AI.cpp. So your tests are concerned not only with correctness, but also with benchmarking the power of your AI. You should be constantly testing whether your AI performs well on different inputs.
There are two ways to test your AI:
- When running your implementation for the Core, select either Load saved game (once you have completed the function Game::playGame) or Start new game. Then enter 2 for Watch AI play. As you play the game, when you are prompted with Enter move: or Choices: for a pickup list, just press enter. Your AI should provide the move or pickup list.
- Submit your AI to the autograder.
Input Files
A description of input files and how to use them is found on this page: Input Files
Submitting and Grading
For the Reach, you will submit AI.cpp to the autograder.

Your AI, the functions in AI.cpp, will be graded by its performance on a variety of input events:
- We will compile your AI.cpp file with the staff implementation for the Core. Your AI will not be tested with your solution files for the Core.
- We will execute your AI repeatedly with different game input files. Game input files will be a long list of timestamped events, similar to what is shown in game.in and save.in found in the starter files.
- The input files will be of differing difficulty, for example by varying the number of events and starting angerLevels.
- The input files will have different patterns. Examples are:
  - A larger portion of Persons entering the bottom/lower floors with a higher target floor. These tests are labeled "Morning" on the autograder.
  - Many Persons with a lower target floor. These tests are labeled "Evening" on the autograder.
  - A random mix. These tests are labeled "Random" on the autograder.
- The performance of your AI will be measured using a subset of the metrics in SatisfactionIndex, recorded at the end of each game. The final values will be compared to the passing thresholds determined by staff. The autograder will show you which test cases you pass and which you do not, along with your SatisfactionIndex metrics, for each test.
Showcase
At the Showcase, you will be given space on a table to display your project using a poster and a live demo of your Reach on a laptop. (Note that there will not be power available at the tables at the Showcase.) Your team will give presentations to the EECS 183 staff about your Reach.
Your poster and presentation must answer the following questions:
- Strategy: What is your AI's strategy and why did you expect it to do well at the Elevators game?
  Example: Our AI brings elevators to the floors that have the angriest people. This will prevent people from exploding, which leads to a higher score.
- Implementation: How did you translate your strategy into an algorithm?
  Example: We loop over each of the floors and find the sum of the anger on each floor. If a person will explode before the elevator can arrive, we do not factor in their anger. We always pick up every person going in a particular direction on a floor. (A code sketch of this example follows this list.)
- Evaluation: What are the strengths and weaknesses of the strategy? Which types of games did it do the best on and why?
  Example: This strategy worked well on games where there were an uneven number of people on each floor, because it prioritized floors where we could help more people. This strategy did not do as well on games where most people had the same anger levels, because it didn't know that, given the chance, servicing nearby floors would lead to better results than faraway floors.
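As an illustration of how the example Implementation answer above might translate to code, here is a rough sketch of its anger-sum loop. All of the types and fields here (PersonInfo, FloorInfo, ticksUntilExplode, and so on) are hypothetical stand-ins, not the project's real starter types.

```cpp
// Rough sketch of the "sum the anger per floor" example above.
// Every name below is a hypothetical illustration.
#include <vector>

struct PersonInfo {
    int angerLevel;
    int ticksUntilExplode;  // hypothetical: turns before this person explodes
};

struct FloorInfo {
    std::vector<PersonInfo> people;
};

// Score a floor by the anger of people an elevator could still save.
int floorScore(const FloorInfo& floor, int ticksForElevatorToArrive) {
    int score = 0;
    for (const PersonInfo& person : floor.people) {
        // Skip people who will explode before the elevator arrives;
        // their anger can no longer be serviced.
        if (person.ticksUntilExplode <= ticksForElevatorToArrive) {
            continue;
        }
        score += person.angerLevel;
    }
    return score;
}
```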
Artwork and diagrams will also make for a better poster and make it easier to explain your work during the presentation! Have fun with it! Posters mounted on posterboards look the best, but any kind of paper display that has the above information will receive full credit.
Showcase grading rubric
You can earn up to 10 points at the Showcase:
- 2 points for attendance
- 4 points for a poster that meets the above requirements and a laptop demo of your AI
- 4 points for participating in the presentation