# 🚀 Building an AI-Powered Call Intelligence System: A Developer's Epic Journey
Biswajit Patra
Posted on November 4, 2024
Welcome aboard, brave code warriors! Today, we're embarking on an epic quest to build, test, and evolve a call intelligence system that turns boring audio files into goldmines of insights. Grab your favourite caffeinated beverage, fire up Google Colab, and let's dive into this adventure!
## 📋 Table of Contents
- The Quest Begins: Basic Setup
- Building Our Magic Powers
- Testing Our Creation
- Leveling Up: Advanced Features
- Battle-Testing in Production
## 🏰 The Quest Begins: Basic Setup

First, let's gather our magical ingredients:

```python
!pip install openai tenacity matplotlib psutil  # Your trusty spell book
```
### 🔐 Securing Your Magic Keys

```python
import os
from getpass import getpass  # Hides the key as you type it

def setup_api_key():
    """Your secret API key vault 🔑"""
    api_key = getpass("Enter your OpenAI API key (we promise to keep it safe!): ")
    os.environ['OPENAI_API_KEY'] = api_key
    return "API key secured! 🔒"

settings = setup_api_key()
```
## 🧙‍♂️ Building Our Magic Powers

### 🎙️ The Speech Whisperer

```python
from openai import AsyncOpenAI  # Async client so we can await our calls
from tenacity import retry, stop_after_attempt, wait_fixed

class AudioWizard:
    """🎙️ Turning speech into text since 2024"""

    def __init__(self, api_key):
        self.client = AsyncOpenAI(api_key=api_key)
        print("AudioWizard initialized and ready for action! 🚀")

    @retry(stop=stop_after_attempt(3), wait=wait_fixed(2), reraise=True)  # Magic shield against API hiccups
    async def _call_whisper(self, audio_file_path):
        with open(audio_file_path, "rb") as audio_file:
            return await self.client.audio.transcriptions.create(
                model="whisper-1",
                file=audio_file,
                response_format="verbose_json"
            )

    async def transcribe_audio(self, audio_file_path):
        try:
            print("🎧 Listening to your audio...")
            transcription = await self._call_whisper(audio_file_path)
            print("✨ Transcription complete!")
            # verbose_json returns text, language, duration and per-segment
            # details (there is no top-level confidence score in the response)
            return {
                "success": True,
                "text": transcription.text,
                "duration": transcription.duration
            }
        except Exception as e:
            print(f"🚨 Oh no! A wild error appeared: {str(e)}")
            return {"success": False, "error": str(e)}
```
### 🎭 The Script Enchanter

```python
from openai import AsyncOpenAI

class ScriptEnchanter:
    """🎬 Making your transcripts ready for Broadway"""

    def __init__(self, api_key):
        self.client = AsyncOpenAI(api_key=api_key)

    async def format_script(self, text):
        try:
            response = await self.client.chat.completions.create(
                model="gpt-4",
                messages=[
                    {"role": "system", "content": """
                    🎭 You are a master screenplay formatter who:
                    - Names speakers like a casting director
                    - Adds emotions and actions in (parentheses)
                    - Makes conversations flow like poetry
                    """},
                    {"role": "user", "content": f"Transform this into a masterpiece:\n\n{text}"}
                ],
                temperature=0.3
            )
            return response.choices[0].message.content
        except Exception as e:
            return f"🎭 Stage fright! Error: {str(e)}"
```
## 🧪 Testing Our Creation

### 🎯 The Test Master

```python
import os
import time
from dataclasses import dataclass

@dataclass
class TestResults:
    """📊 Keeping score of our system's performance"""
    accuracy: float
    wer: float  # Word Error Rate
    processing_time: float
    confidence_score: float

class TestMaster:
    """🧪 Making sure our magic actually works"""

    def __init__(self):
        self.test_cases = [
            {
                "name": "Clear Speech Test",
                "file": "test_files/clear_speech.wav",
                "expected": "test_files/clear_speech.txt",
                "difficulty": "easy"
            },
            {
                "name": "Noisy Battle Test",
                "file": "test_files/noisy.wav",
                "expected": "test_files/noisy.txt",
                "difficulty": "hard"
            }
        ]
        self.results = {}

    async def run_tests(self):
        """🎯 Running our test gauntlet"""
        wizard = AudioWizard(os.getenv('OPENAI_API_KEY'))  # Don't forget the key!
        for test in self.test_cases:
            print(f"🏃‍♂️ Running {test['name']}...")
            # Time the spell casting
            start_time = time.time()
            # Cast our transcription spell
            result = await wizard.transcribe_audio(test['file'])
            # Calculate our accuracy scores
            metrics = self.calculate_metrics(
                result['text'],
                self.load_expected_text(test['expected']),
                time.time() - start_time
            )
            self.results[test['name']] = metrics
            print(f"✅ Test completed with {metrics.accuracy:.1f}% accuracy!")

    def load_expected_text(self, path):
        """📜 Reading the reference transcript"""
        with open(path) as f:
            return f.read()

    def calculate_metrics(self, result, expected, time_taken):
        """📏 Measuring our magical accuracy"""
        return TestResults(
            accuracy=self.calculate_accuracy(result, expected),
            wer=self.calculate_wer(result, expected),
            processing_time=time_taken,
            confidence_score=self.calculate_confidence(result)
        )
```
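`TestMaster` leans on helpers (`calculate_wer`, `calculate_accuracy`) that never appear in the post. Here's one plausible sketch of them as free functions, using the classic word-level edit distance; treat it as an assumption, not the author's implementation:

```python
def calculate_wer(hypothesis: str, reference: str) -> float:
    """Word Error Rate: word-level edit distance divided by reference length."""
    hyp, ref = hypothesis.lower().split(), reference.lower().split()
    # dp[i][j] = edits to turn the first i reference words into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / max(len(ref), 1)

def calculate_accuracy(hypothesis: str, reference: str) -> float:
    """A simple accuracy score: 100% minus WER, floored at zero."""
    return max(0.0, 1.0 - calculate_wer(hypothesis, reference)) * 100
```

Real ASR benchmarks usually use a library like `jiwer` for this, which also handles normalization (punctuation, casing) more carefully.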
### 📊 The Results Visualizer

```python
from typing import Dict
import matplotlib.pyplot as plt

class ResultsVisualizer:
    """📈 Making our test results look magical"""

    def visualize_results(self, results: Dict[str, TestResults]):
        """Create a fancy visualization of our test results"""
        plt.figure(figsize=(12, 6))
        # Plot accuracy scores
        tests = list(results.keys())
        accuracies = [r.accuracy for r in results.values()]
        plt.bar(tests, accuracies, color='skyblue')
        plt.title("🎯 Accuracy Scores", fontsize=15)
        plt.ylabel("Accuracy (%)")
        # Add some sparkle ✨
        for i, v in enumerate(accuracies):
            plt.text(i, v + 1, f"{v:.1f}%", ha='center', fontsize=12)
        plt.show()
```
## 🚀 Leveling Up: Advanced Features

### 🌟 The Feature Expander

```python
import os
from openai import AsyncOpenAI

class FeatureExpander:
    """🌟 Adding new magical powers"""

    def __init__(self, api_key=None):
        # Fall back to the environment so FeatureExpander() "just works"
        self.client = AsyncOpenAI(api_key=api_key or os.getenv('OPENAI_API_KEY'))

    async def add_speaker_diarization(self, audio):
        """👥 Identifying different speakers"""
        try:
            # Your speaker diarization code here
            print("🎭 Identifying speakers...")
        except Exception as e:
            print(f"🚨 Speaker identification failed: {str(e)}")

    async def add_sentiment_analysis(self, text):
        """😊 Understanding emotional tones"""
        try:
            response = await self.client.chat.completions.create(
                model="gpt-4",
                messages=[
                    {"role": "system", "content":
                     "Analyze the emotional tone of this conversation"},
                    {"role": "user", "content": text}
                ]
            )
            return response.choices[0].message.content
        except Exception as e:
            return f"😢 Emotion detection failed: {str(e)}"
```
### 🧪 Testing New Features

```python
import time
from typing import Dict

class FeatureTester:
    """🧪 Testing our new magical powers"""

    async def test_new_feature(self, feature_name: str, test_data: Dict):
        print(f"🧪 Testing {feature_name}...")
        start_time = time.time()
        try:
            # Run the feature
            result = await self.run_feature(feature_name, test_data)
            # Validate the results
            validation = self.validate_feature(feature_name, result)
            # Calculate metrics
            metrics = {
                "success": validation["success"],
                "processing_time": time.time() - start_time,
                "accuracy": validation["accuracy"]
            }
            return {
                "status": "✅ Test passed!" if validation["success"]
                          else "❌ Test failed!",
                "metrics": metrics
            }
        except Exception as e:
            return {
                "status": "💥 Test crashed!",
                "error": str(e)
            }
```
## ⚔️ Battle-Testing in Production

### 📊 Performance Testing

```python
import time
import psutil  # For memory monitoring

class PerformanceTester:
    """⚡ Making sure our system is a speed champion"""

    async def run_performance_test(self, test_duration: int = 3600):
        """🏃‍♂️ Running a performance marathon"""
        print("🚀 Starting performance test...")
        metrics = {
            "processed_files": 0,
            "average_processing_time": 0,
            "error_rate": 0,
            "memory_usage": []
        }
        start_time = time.time()
        while time.time() - start_time < test_duration:
            # Process test file
            result = await self.process_test_file()
            # Update the running average without storing every sample
            metrics["processed_files"] += 1
            metrics["average_processing_time"] = (
                (metrics["average_processing_time"] *
                 (metrics["processed_files"] - 1) +
                 result["processing_time"]) /
                metrics["processed_files"]
            )
            # Monitor memory (resident set size in MB)
            metrics["memory_usage"].append(
                psutil.Process().memory_info().rss / 1024 / 1024
            )
        return metrics
```
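That running-average update is the standard incremental-mean formula, handy whenever you don't want to store every sample. Factored into a helper (the name `update_running_mean` is my own, not from the post), it looks like this:

```python
def update_running_mean(current_mean: float, count: int, new_value: float) -> float:
    """Fold the count-th value into the mean of the first count-1 values.

    new_mean = (current_mean * (count - 1) + new_value) / count
    """
    return (current_mean * (count - 1) + new_value) / count
```

Feeding it 2.0, 4.0, 6.0 one at a time yields a mean of 4.0, the same as averaging them all at once.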
### 📈 Results Dashboard

```python
from typing import Dict
import matplotlib.pyplot as plt

class ResultsDashboard:
    """📈 Creating our magical mission control center"""

    def create_dashboard(self, metrics: Dict):
        """📊 Building a fancy dashboard"""
        # Expects aggregate keys: processing_times, wer, accuracy,
        # memory_usage, success_rate
        fig = plt.figure(figsize=(15, 10))
        # Processing time graph
        plt.subplot(2, 2, 1)
        plt.plot(metrics["processing_times"])
        plt.title("⚡ Processing Speed")
        # Accuracy graph
        plt.subplot(2, 2, 2)
        plt.bar(["WER", "Accuracy"], [metrics["wer"], metrics["accuracy"]])
        plt.title("🎯 Accuracy Metrics")
        # Memory usage
        plt.subplot(2, 2, 3)
        plt.plot(metrics["memory_usage"])
        plt.title("💾 Memory Usage")
        # Error rates
        plt.subplot(2, 2, 4)
        plt.pie([metrics["success_rate"], 100 - metrics["success_rate"]],
                labels=["Success", "Errors"])
        plt.title("✅ Success Rate")
        plt.tight_layout()
        plt.show()
```
## 🔮 Putting It All Together

```python
import os

async def run_complete_system():
    """🔮 Running our complete magical system"""
    # Initialize our system
    print("🧙‍♂️ Initializing magical systems...")
    audio_wizard = AudioWizard(os.getenv('OPENAI_API_KEY'))
    test_master = TestMaster()
    feature_expander = FeatureExpander()
    # Run core functionality tests
    print("\n🧪 Testing core magic...")
    await test_master.run_tests()
    # Test new features
    print("\n🌟 Testing new powers...")
    feature_results = await FeatureTester().test_new_feature(
        "sentiment_analysis",
        {"text": "Sample conversation"}
    )
    # Run performance tests
    print("\n⚡ Running speed tests...")
    performance_metrics = await PerformanceTester().run_performance_test(
        test_duration=300  # 5-minute test
    )
    # Create dashboard
    print("\n📊 Creating magical dashboard...")
    ResultsDashboard().create_dashboard({
        **test_master.results,
        **performance_metrics
    })
    print("\n✨ All tests completed! ✨")

# Run everything! (top-level await works in Colab/Jupyter notebooks)
await run_complete_system()
```
## 🎓 Pro Tips for the Advanced Wizard

- 🔧 Optimization Spells

  ```python
  # Pre-process audio for better results
  audio = (AudioProcessor(file)
           .reduce_noise()
           .normalize_volume()
           .export())
  ```

- 🛡️ Error Handling Shield

  ```python
  # Protect against API timeouts (tenacity retries work on async functions)
  from tenacity import retry, stop_after_attempt, wait_exponential

  @retry(stop=stop_after_attempt(3), wait=wait_exponential(multiplier=2))
  async def protected_api_call():
      # Your API calls here
      pass
  ```

- ⚡ Speed Enchantments

  ```python
  # Batch processing for multiple files
  async def process_batch(files):
      return await asyncio.gather(
          *[process_file(f) for f in files]
      )
  ```
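One caveat with `asyncio.gather`: it fires every request at once, which is a good way to meet the API's rate limiter. A variant of my own (not from the post) caps in-flight work with a semaphore:

```python
import asyncio

async def process_batch_limited(files, worker, max_concurrent=5):
    """Like process_batch, but never more than max_concurrent tasks in flight."""
    semaphore = asyncio.Semaphore(max_concurrent)

    async def guarded(f):
        async with semaphore:  # Wait for a free slot before starting
            return await worker(f)

    # gather preserves input order, so results line up with files
    return await asyncio.gather(*[guarded(f) for f in files])
```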
## 🎯 Future Quests

Ready to take your system to the next level? Here are some epic quests to pursue:

- 🌐 Multilingual Magic
  - Add support for multiple languages
  - Implement cross-language translation
  - Add cultural context awareness
- 🎭 Advanced Character Recognition
  - Improve speaker diarization
  - Add emotion detection
  - Implement personality insights
- ⚡ Performance Boosters
  - Implement caching
  - Add distributed processing
  - Optimize memory usage
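Of these, caching is the quickest win: transcribing identical audio twice wastes both money and time. Here's a minimal sketch that keys a local JSON cache on a hash of the audio bytes; the `cached_transcription` helper and cache layout are my own assumptions, not from the post:

```python
import hashlib
import json
from pathlib import Path

CACHE_DIR = Path("transcription_cache")

def cached_transcription(audio_path, transcribe):
    """Return a cached transcript if we've already seen these exact bytes."""
    CACHE_DIR.mkdir(exist_ok=True)
    digest = hashlib.sha256(Path(audio_path).read_bytes()).hexdigest()
    cache_file = CACHE_DIR / f"{digest}.json"
    if cache_file.exists():
        return json.loads(cache_file.read_text())  # Cache hit: skip the API call
    result = transcribe(audio_path)                # Cache miss: do the real work
    cache_file.write_text(json.dumps(result))
    return result
```

Hashing the bytes (rather than the filename) means a renamed copy of the same recording still hits the cache.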
## 🏆 Victory Lap

Congratulations, brave developer! You've built a powerful call intelligence system, complete with:

- 🎯 Robust testing
- 📊 Performance monitoring
- 🌟 Advanced features
- 🛡️ Error handling

Remember: the best code is like magic - it works reliably, but there's always room for more enchantments!

Now go forth and build amazing things! 🚀✨

P.S. If you find any bugs, they're not bugs - they're just unexpected features taking a vacation! 😄