🧪 Testing Framework for Code-Formatter-Advisor

Elisaassa

Posted on November 9, 2024


🎯 Overview

In this blog, I’ll share my experience setting up automated testing for the Code-Formatter-Advisor project. I’ll cover the tools I used, how I set up the environment and handled mock responses, and what I learned along the way.

🔧 Chosen Tools

I used pytest as my testing framework, along with Python’s unittest.mock module to create mock responses for my LLM API. Here’s why:

  • pytest: simple, readable test syntax, automatic test discovery, and handy built-in fixtures like tmp_path.
  • unittest.mock: part of the Python standard library, so it adds no extra dependencies, and its patch and MagicMock utilities make it easy to replace live API calls with predictable fake responses.

🛠️ Setting Up the Testing Environment

Here’s how I set up the testing framework in my project:

  1. Installing Dependencies:
   pip install pytest
   pip install python-dotenv

These are used for running tests and managing environment variables.

  2. Creating the Test File:
    I created test_example.py in my project’s root directory. It contains unit tests for various functions in my project.

  3. Writing the Tests:
    I used pytest and unittest.mock to create mock versions of LLM API responses and simulate different scenarios (a simplified example follows below).
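
For example, here’s a minimal sketch of what a simple unit test in test_example.py can look like. It exercises the project’s read_code_file helper using pytest’s built-in tmp_path fixture; I’m assuming here that read_code_file takes a path and returns the file’s contents as a string:

# A minimal sketch — assumes read_code_file(path) returns the file's text
from analyzer import read_code_file

def test_read_code_file_returns_contents(tmp_path):
    # tmp_path is a pytest fixture providing a unique temporary directory
    sample = tmp_path / "sample.py"
    sample.write_text("print('hello')")

    # The helper should return exactly what was written to the file
    assert read_code_file(str(sample)) == "print('hello')"

Running pytest from the project root automatically discovers files named test_*.py and executes every test function in them.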

🌀 Mocking the LLM API

Mocking LLM responses was crucial for reproducible tests. I created a mock response using MagicMock from unittest.mock, which allowed me to:

  • Return a consistent response for each request.
  • Avoid relying on live API calls.

Example of mocking (the imports shown here are what the test needs; the end of the test body is abbreviated):

from unittest.mock import MagicMock, patch

@patch("analyzer.send_chat_completion_request")
@patch("analyzer.read_code_file")
def test_analyze_code_with_mocked_llm(mock_read_code_file, mock_send_chat_completion_request, tmp_path):
    # Decorators apply bottom-up, so mock_read_code_file belongs to the read_code_file patch
    mock_read_code_file.return_value = SAMPLE_CODE_CONTENT
    # MagicMock auto-creates nested attributes, so this mimics the LLM response shape
    mock_llm_response = MagicMock()
    mock_llm_response.choices[0].message.content = "Mocked formatting suggestions."
    mock_send_chat_completion_request.return_value = mock_llm_response
    # ...

💡 Lessons Learned

Aha! Moments

  • Mocking Simplifies Testing: Mocking the LLM allowed me to avoid issues like rate limits and unstable responses, making tests consistent.
  • Refactoring Improves Testability: I realized breaking large functions into smaller components makes them much easier to test (see the sketch below).
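
To make that concrete, here’s a hypothetical before-and-after (not the project’s actual code) showing why smaller functions are easier to test:

# Hypothetical illustration, not the project's actual code.

# Hard to test: file I/O, analysis, and output are fused into one function
def analyze_and_report(path):
    code = open(path).read()
    long_lines = [n for n, line in enumerate(code.splitlines(), 1) if len(line) > 79]
    print(f"{len(long_lines)} long lines found")

# Easier to test: each step is small and can be exercised in isolation
def find_long_lines(code, limit=79):
    return [n for n, line in enumerate(code.splitlines(), 1) if len(line) > limit]

def format_report(long_lines):
    return f"{len(long_lines)} long lines found"

find_long_lines and format_report can now be unit-tested with plain strings and lists: no files, no mocks, and no captured stdout.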

Challenges

Initially, I faced issues with Git branches, specifically merging my testing branch into main. After some troubleshooting, I learned about interactive rebase and fast-forward merging, which cleaned up the commit history.
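
For anyone hitting the same thing, this is the general shape of that workflow, as a sketch using the same branch names as above:

# Clean up the commit history on the feature branch (interactive rebase)
git checkout testing
git rebase -i main

# Fast-forward main onto the cleaned-up branch
git checkout main
git merge --ff-only testing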

🐞 Interesting Bugs Found

  • Missing Files: I found that when certain files were missing, my code wasn’t handling the errors gracefully. This was fixed by adding appropriate error logging (a sketch of the fix is below).
  • API Errors: Testing revealed that not all API errors were being caught, which could lead to crashes.
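
As a rough sketch of the kind of handling I mean for both cases (the bodies here are illustrative, not the project’s actual code, and safe_completion_request with its parameters is a hypothetical wrapper):

import logging

logger = logging.getLogger(__name__)

def read_code_file(path):
    # Illustrative body: log and return None instead of crashing on a missing file
    try:
        with open(path, encoding="utf-8") as f:
            return f.read()
    except FileNotFoundError:
        logger.error("File not found: %s", path)
        return None

def safe_completion_request(client, prompt):
    # Illustrative wrapper: catch API errors so one bad request doesn't crash the run
    try:
        return send_chat_completion_request(client, prompt)
    except Exception as exc:
        logger.error("LLM API request failed: %s", exc)
        return None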

🎓 Takeaways

  • Testing Matters: I’d never done thorough testing before, but now I see its value in creating reliable software.
  • Smaller Is Better: Breaking functions down into smaller units isn’t just good practice for readability; it also makes them far easier to test.

I’ll definitely continue writing tests for future projects to ensure better quality code.

📘 Conclusion

Setting up testing for my project taught me a lot about how important automated tests are for maintaining code quality. Mocking tools like unittest.mock are powerful allies when working with external APIs.

If you’re new to testing, my advice is: start small, be consistent, and keep learning. Writing good tests takes practice, but it’s absolutely worth it!

Let me know your thoughts or questions below! 😊
