Overview
Currently working at Postman Inc. as an Applied AI / Software Engineer, where I'm building AI-native features for developers. I'm focused on developing agentic workflows, integrating large language models at scale, and creating intelligent automation features that enhance the developer experience for millions of Postman users worldwide.
Key Achievements
- Postbot & Agent Mode - Built agentic workflows in Postman that enable intelligent automation and AI-assisted API development
- AI Consent and Settings Framework - Architected and implemented comprehensive AI consent, settings, and usage accrual workflows that ensure user privacy and transparency
- LLM Integration at Scale - Integrated large language models at scale using AI gateways to optimize workflows and deliver consistent performance
- Developer-First AI Features - Created AI-native features specifically designed for API developers, focusing on productivity and ease of use
Technical Implementation
AI Agent Development
Postbot & Agent Mode
Architected and developed Postbot, an AI assistant for Postman, along with Agent Mode for advanced agentic workflows. These features enable developers to:
- Generate API requests and tests using natural language
- Automatically debug and troubleshoot API issues
- Understand complex API documentation quickly
- Create automated workflows through conversational interfaces
The implementation involved:
- Designing prompt engineering strategies for API-specific use cases
- Building context-aware AI assistants that understand API specifications
- Implementing streaming responses for real-time user feedback
- Creating robust error handling and fallback mechanisms
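The streaming-plus-fallback pattern above can be sketched in a few lines. This is an illustrative sketch only: `primaryStream`, `fallbackStream`, and the token format are hypothetical stand-ins, not Postbot's actual internals.

```typescript
type Token = string;

// Hypothetical primary provider: streams tokens but fails mid-response.
async function* primaryStream(prompt: string): AsyncGenerator<Token> {
  yield "Generated ";
  yield "test ";
  throw new Error("provider timeout"); // simulate a mid-stream failure
}

// Hypothetical fallback provider: assumed reliable.
async function* fallbackStream(prompt: string): AsyncGenerator<Token> {
  for (const t of ["Generated ", "test ", "for ", prompt]) yield t;
}

// Stream tokens to the caller for real-time feedback; if the primary
// provider errors, discard the partial output and replay from the fallback.
async function streamWithFallback(
  prompt: string,
  onToken: (t: Token) => void,
): Promise<string> {
  let buffer = "";
  try {
    for await (const t of primaryStream(prompt)) {
      buffer += t;
      onToken(t);
    }
  } catch {
    buffer = ""; // restart cleanly from the fallback provider
    for await (const t of fallbackStream(prompt)) {
      buffer += t;
      onToken(t);
    }
  }
  return buffer;
}
```

The key design choice is that the fallback replays the full response rather than resuming mid-stream, which keeps the client-side state simple.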
AI Infrastructure & Scaling
LLM Integration at Scale
Integrated multiple large language models at scale using AI gateways, ensuring:
- High Availability: 99.9% uptime for AI features across a global user base
- Cost Optimization: Intelligent routing and caching strategies to optimize LLM costs
- Performance: Sub-2-second response times for AI queries
- Model Flexibility: Support for multiple LLM providers with seamless switching
Implemented AI gateway architecture that provides:
- Load balancing across multiple LLM providers
- Rate limiting and quota management
- Request/response caching for improved performance
- Comprehensive monitoring and logging
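The caching and provider-failover behavior of a gateway layer like this can be sketched as follows. This is a minimal illustration under assumed shapes; the `Provider` interface and `MiniGateway` class are hypothetical, not the production gateway.

```typescript
// Hypothetical provider shape: a name plus an async completion call.
type Provider = { name: string; call: (prompt: string) => Promise<string> };

class MiniGateway {
  // Request/response cache: identical prompts skip the LLM entirely,
  // which is one lever for both latency and cost optimization.
  private cache = new Map<string, string>();

  constructor(private providers: Provider[]) {}

  async complete(prompt: string): Promise<string> {
    const hit = this.cache.get(prompt);
    if (hit !== undefined) return hit; // cache hit: no provider call

    let lastError: unknown;
    for (const p of this.providers) {
      try {
        const out = await p.call(prompt);
        this.cache.set(prompt, out);
        return out;
      } catch (e) {
        lastError = e; // provider failed: fail over to the next one
      }
    }
    throw lastError ?? new Error("no providers configured");
  }
}
```

A production gateway would add TTLs on the cache, per-provider rate limits, and latency-aware routing, but the failover loop and cache-first lookup are the core of the pattern.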
Privacy & Compliance
AI Consent and Settings Framework
Developed comprehensive AI consent and settings infrastructure:
- User Consent Management: Granular controls for AI feature usage
- Usage Tracking: Real-time accrual and monitoring of AI usage
- Privacy Controls: Data handling and retention policies
- Audit Logging: Complete audit trails for compliance
This framework ensures:
- Transparent AI usage for enterprise customers
- Adherence to GDPR and other compliance requirements
- User control over AI feature interactions
- Clear communication of AI capabilities and limitations
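The per-feature consent and usage-accrual model described above can be sketched like this. The feature names and field layout here are assumptions for illustration, not Postman's actual schema.

```typescript
// Hypothetical feature identifiers and settings shape.
type AiFeature = "postbot" | "agent-mode";

interface AiSettings {
  consent: Record<AiFeature, boolean>; // granular per-feature opt-in
  usage: Record<AiFeature, number>;    // accrued request counts
}

// Gate every AI interaction on explicit consent.
function canUse(settings: AiSettings, feature: AiFeature): boolean {
  return settings.consent[feature] === true;
}

// Accrue usage only when consent has been granted; reject otherwise.
// Returns a new settings object rather than mutating, which keeps
// audit logging of before/after states straightforward.
function recordUsage(settings: AiSettings, feature: AiFeature): AiSettings {
  if (!canUse(settings, feature)) {
    throw new Error("consent not granted for " + feature);
  }
  return {
    ...settings,
    usage: { ...settings.usage, [feature]: settings.usage[feature] + 1 },
  };
}
```

Keeping the consent check inside the accrual path means there is no code path that meters usage without an opt-in on record.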
Technologies & Stack
Frontend Development
- React & TypeScript: Building scalable, type-safe UI components
- State Management: Redux for complex application state
- Real-time Updates: WebSocket integration for live AI responses
- Design System: Postman's design system for consistent UX
Backend Development
- Node.js: Microservices architecture for AI features
- TypeScript: Type-safe backend services
- MongoDB: Storing user preferences and AI interaction history
- Redis: Caching and session management
Cloud & DevOps
- AWS: EC2, Lambda, S3, CloudFront for scalable infrastructure
- Docker: Containerized deployments
- CI/CD: Automated testing and deployment pipelines
- Monitoring: Application performance monitoring and error tracking
AI & Machine Learning
- LLM Integration: OpenAI, Anthropic, and other providers
- Prompt Engineering: Optimizing prompts for API-specific use cases
- Vector Databases: Semantic search for API documentation
- AI Gateways: Portkey and LangChain for robust LLM orchestration
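As a small illustration of the prompt-engineering work, an API-specific prompt can be assembled by grounding the model in the request's own context. The template and field names below are hypothetical, not the prompts actually shipped.

```typescript
// Hypothetical context extracted from an API request or its spec.
interface ApiRequestContext {
  method: string;
  url: string;
  description?: string;
}

// Build a prompt that anchors the task in concrete request details,
// so the model reasons about this API rather than APIs in general.
function buildTestGenPrompt(ctx: ApiRequestContext, task: string): string {
  const lines = [
    "You are an assistant for API development.",
    "Request: " + ctx.method + " " + ctx.url,
  ];
  if (ctx.description) {
    lines.push("Description: " + ctx.description);
  }
  lines.push("Task: " + task);
  return lines.join("\n");
}
```

Structured, field-by-field context like this tends to be easier to evaluate and iterate on than a single free-form prompt string.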
Impact & Results
User Adoption
- AI Feature Usage: Significant adoption of AI features across user base
- User Satisfaction: Positive feedback on AI assistant capabilities
- Time Savings: Reduced time for API testing and debugging
Technical Excellence
- Performance: Maintained sub-2-second response times for AI queries
- Reliability: 99.9% uptime for AI-powered features
- Scalability: Successfully handling millions of AI requests monthly
- Cost Efficiency: Optimized LLM usage reducing costs by 40%
Future Vision
Working on expanding AI capabilities in Postman, including:
- Advanced agentic workflows for complex API scenarios
- Multi-agent collaboration for API testing
- Automated API documentation generation
- Intelligent API recommendation systems
- Enhanced context understanding for API debugging