Lab 3 - Testing and Functions
Due Date and Links
- Lab due on your scheduled lab day
- Lab accepted for full credit until Monday, February 2, 11:59 pm Eastern
- Direct autograder link: https://autograder.io/web/project/3783
Background
One common application of AI is in content moderation. Social media platforms such as Instagram and Facebook use AI to analyze posts and captions to ensure there is no inappropriate or harmful content in them. While they do so using complex machine learning algorithms, a simpler version of this can be implemented using the power of function composition. In this lab, you will build a Sentiment Analysis Bot, which is a program that reads text and determines if it gives Immaculate, Good, Neutral, Bad, or Disastrous vibes. You will gain practice with function implementation and use, as well as get your first exposure to testing.
Your program will take as input a short phrase representing a social media post caption. It will split the phrase into a list of words and then “clean” each word by stripping any leading/trailing whitespace, removing punctuation, and converting the word to lowercase. Then, it will iterate through each word in the list and give it a sentiment score. If the word is one of “good”, “great”, “love”, “excellent”, or “fantastic”, it receives a sentiment score of 1. If it’s one of “bad”, “hate”, “terrible”, “awful”, “unfair”, or “horrible”, it receives a sentiment score of -1. Otherwise, it receives a sentiment score of 0. Finally, the program will sum the sentiment scores of all the words to get the overall sentiment score of the phrase. It will then return a “label” corresponding to the sentiment score.
For example, to analyze the phrase “Hello! Have a lovely day with fantastic company!”, we first split the phrase into the following list:
["Hello!", "Have", "a", "lovely", "day", "with", "fantastic", "company!"]
We then clean each word, ending with the following list:
["hello", "have", "a", "lovely", "day", "with", "fantastic", "company"]
Note: We “clean” the words so that you can accurately check the sentiment of each word. Python treats “love”, “Love”, and “love,” as completely different strings, which makes them difficult to compare accurately. Cleaning the word removes this variability.
We then calculate the sentiment score of each word:
hello: 0, have: 0, a: 0, lovely: 0, day: 0, with: 0, fantastic: 1, company: 0
This allows us to calculate the total sentiment score of the sentence: 1. We output the label “Good”.
Important: This lab uses a Test-Driven Development (TDD) workflow. You will write the tests before you write the solution logic.
Note: In EECS 183, we grade your test cases (in short) by running them on a copy of our correct solution code and on a copy of our code with mistakes, and saving the outputs of these two programs. We then “diff” the outputs. If your test cases created a difference in output between the correct and incorrect code, you “caught” a bug. Because of this, when you call functions in your test cases, you must print the function’s output! Otherwise, there is no way for us to detect whether or not you have caught a bug. For more detail, review Tutorial 6.
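For example, the difference between a printed call and an unprinted call looks like this (a hypothetical snippet for illustration only; your real tests belong inside the provided test functions in vibe_check_test.py):

clean_word("Good!")          # return value is discarded: no output, so no bug can be caught
print(clean_word("Good!"))   # the result is printed, so a difference in behavior shows up in the diff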
Starter Files
You can download the starter files using this link.
The starter files are:
- vibe_check.py: This is where your function implementations live.
- vibe_check_test.py: This is where your test cases live.
Note: You will always run vibe_check_test.py. It imports the functions from your logic file.
How to Submit
IMPORTANT: For all labs in EECS 183, to receive a grade, every student must individually submit their work. Late submissions for Labs will not be accepted for credit.
- Once you receive a grade of 10 out of 10 points from the autograder, you have full credit for this lab assignment.
Function 1: clean_word()
Before we can analyze a word, we need to clean it (more formally, standardize or normalize it). As explained above, Python sees "Good!" and "good" as totally different strings. This function cleans a word up by stripping any leading or trailing whitespace, removing any leading or trailing punctuation marks (commas, periods, question marks, and exclamation marks), and converting the entire word to lowercase. It then returns the cleaned word.
Keeping this functionality in mind, open vibe_check_test.py. Find test_clean_word() and write test cases to test clean_word(). We have provided a basic test. You will be responsible for brainstorming other test cases for your code.
HINT: There are three basic requirements for this function: whitespace removal, punctuation removal, and switching to lowercase. Ensure you write test cases that cover all three of these requirements.
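For instance (a rough illustration only; follow the format of the basic test we provide), tests covering the three requirements might look like this:

print("Expected: good")
print("Actual: " + clean_word("  Good  "))      # leading/trailing whitespace
print("Expected: love")
print("Actual: " + clean_word("love!"))         # trailing punctuation
print("Expected: fantastic")
print("Actual: " + clean_word("FANTASTIC"))     # uppercase letters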
Run the test file. The strings after Expected and Actual should match for each line.
Since we have given you the solution code for this function, you are only responsible for testing it. The solution code is in vibe_check.py (clean_word()). You will see the string methods used in clean_word() in a future lecture, but as a quick preview:
- .strip() removes any leading or trailing whitespace around the word.
- .lower() converts the entire word to lowercase.
- .strip('.,!?') removes any leading or trailing commas, periods, question marks, and/or exclamation marks. Note that it does not remove these marks from within the word.
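As a quick standalone illustration of how these methods chain together (not part of the lab files):

word = "  Good!  "
print(word.strip())                        # "Good!"  (whitespace removed)
print(word.strip().lower())                # "good!"  (lowercased)
print(word.strip().lower().strip('.,!?'))  # "good"   (punctuation stripped from the ends)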
Function 2: get_word_score()
This function assigns a numerical sentiment score to the word it takes as input. Scores are assigned as follows:
"good", "great", "love", "excellent", "fantastic": receives a score of 1"bad", "hate", "terrible", "awful", "unfair", "horrible": receives a score of -1- All other words receive a score of 0.
Keeping these requirements in mind, go to vibe_check_test.py. Find test_get_word_score() and write test cases to test get_word_score(). We have provided a basic test. You will be responsible for brainstorming other test cases for your code.
Run the test file. You should see your prints, but nothing will appear after “Actual:”. This is normal: you haven’t implemented your function yet!
Now, open vibe_check.py and implement get_word_score() according to the RME.
HINT: Words must exactly match one of the listed words to receive a nonzero sentiment score. For example, “lovely” should receive a sentiment score of 0. Even though it contains the word “love”, “lovely” is not an exact match for “love”.
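If you are unsure where to start, one possible shape for this logic is sketched below. This is an illustration only; your implementation must follow the RME in vibe_check.py, and the exact parameter name there may differ.

def get_word_score(word):
    # Hypothetical sketch: return 1 for positive words, -1 for negative words, 0 otherwise.
    positive_words = ["good", "great", "love", "excellent", "fantastic"]
    negative_words = ["bad", "hate", "terrible", "awful", "unfair", "horrible"]
    if word in positive_words:
        return 1
    if word in negative_words:
        return -1
    return 0

Note that the in operator checks for an exact match, so “lovely” does not count as “love”.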
Run the test file again. If your logic is correct, the strings after Expected and Actual should match for each line.
Function 3: calculate_vibe()
get_word_score() enables us to score one word. We now want to assign a score to an entire phrase (list of words) by aggregating the sentiment scores of all the words within the phrase. If the list passed into calculate_vibe() is empty, return 0.
Keeping these requirements in mind, go to vibe_check_test.py. Find test_calculate_vibe() and write test cases to test calculate_vibe(). We have provided a basic test. You will be responsible for brainstorming other test cases for your code.
Run the test file. You should see your prints, but nothing will appear after “Actual:”. This is normal: you haven’t implemented your function yet!
Now, open vibe_check.py and implement calculate_vibe() according to the RME. You should call get_word_score() in this function.
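A minimal sketch of the aggregation is shown below (assuming the parameter is a list of already-cleaned words; check the RME for the exact signature and parameter name):

def calculate_vibe(words):
    # Hypothetical sketch: sum the sentiment scores of every word; an empty list sums to 0.
    total = 0
    for word in words:
        total += get_word_score(word)
    return total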
Run the test file again. If your logic is correct, the strings after Expected and Actual should match for each line.
Function 4: analyze_post()
We will use function composition in analyze_post() to connect everything together. This function will take in a caption as a string and output a label for the caption’s sentiment. The labels are assigned as follows:
- A score of greater than 3 receives a label of "Immaculate"
- A score of between 1 and 3 inclusive receives a label of "Good"
- A score of 0 receives a label of "Neutral"
- A score of between -3 and -1 inclusive receives a label of "Bad"
- A score of less than -3 receives a label of "Disastrous"
Keeping these requirements in mind, go to vibe_check_test.py. Find test_analyze_post() and write test cases to test analyze_post(). We have provided a basic test. You will be responsible for brainstorming other test cases for your code.
Run the test file. You should see your prints, but nothing will appear after “Actual:”. This is normal: you haven’t implemented your function yet!
Now, open vibe_check.py and implement analyze_post() according to the RME. You will need to perform the following steps:
- Split the input text (representing the caption) into a list of words. HINT: You can do this with text.split(). This line of code splits text by whitespace and adds each word to a list. For example, splitting the string "Hello my name is Krithika, YAY!" yields ["Hello", "my", "name", "is", "Krithika,", "YAY!"].
- Iterate through the list of words and clean each word. At the end, you should have a new list of the cleaned words. HINT: Start with a new variable that is the empty list, []. Call the append method on that list inside a for-loop body to add the cleaned words to the new list.
- Call calculate_vibe() to get the sentiment score of the whole sentence.
- Use this sentiment score to return the appropriate label, as described above and in the RME (see the sketch after this list).
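Putting the steps above together, a possible sketch looks like the following. This is an illustration only; the exact parameter name, thresholds, and return values must follow the RME in vibe_check.py.

def analyze_post(text):
    # Hypothetical sketch of the steps described above.
    words = text.split()                     # step 1: split the caption on whitespace
    cleaned_words = []                       # step 2: clean every word
    for word in words:
        cleaned_words.append(clean_word(word))
    score = calculate_vibe(cleaned_words)    # step 3: score the whole sentence
    # step 4: map the score to a label
    if score > 3:
        return "Immaculate"
    elif score >= 1:
        return "Good"
    elif score == 0:
        return "Neutral"
    elif score >= -3:
        return "Bad"
    else:
        return "Disastrous"

For example, analyze_post("Hello! Have a lovely day with fantastic company!") would return "Good".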
Run the test file again. If your logic is correct, the strings after Expected and Actual should match for each line.
How to Submit
- When you’re ready, submit to the autograder. You will submit your vibe_check.py and vibe_check_test.py files.
IMPORTANT: For all labs in EECS 183, to receive a grade, every student must individually submit the Lab Submission. Late submissions for labs will not be accepted for credit. For this lab, you will receive ten submissions per day with feedback.
- Once you receive a grade of 10 out of 10 points from the autograder, you will have received full credit for this lab.
Copyright and Academic Integrity
© 2026 .
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
All materials provided for this course, including but not limited to labs, projects, notes, and starter code, are the copyrighted intellectual property of the author(s) listed in the copyright notice above. While these materials are licensed for public non-commercial use, this license does not grant you permission to post or republish your solutions to these assignments.
It is strictly prohibited to post, share, or otherwise distribute solution code (in part or in full) in any manner or on any platform, public or private, where it may be accessed by anyone other than the course staff. This includes, but is not limited to:
- Public-facing websites (like a personal blog or public GitHub repo).
- Solution-sharing websites (like Chegg or Course Hero).
- Private collections, archives, or repositories (such as student group “test banks,” club wikis, or shared Google Drives).
- Group messaging platforms (like Discord or Slack).
To do so is a violation of the university’s academic integrity policy and will be treated as such.
Asking questions by posting small code snippets to our private course discussion forum is not a violation of this policy.