> **Under construction:** This page is under construction. Please check back later for comprehensive guidance.
# Google Workspace Incident Response Playbooks

This document provides incident response playbooks for common Google Workspace security incidents. Each playbook covers the detection, investigation, containment, eradication, recovery, and post-incident analysis phases.
## Playbook Structure
Each incident response playbook follows a standardized structure:
1. Incident Overview
   - Description and potential impact
   - Common attack vectors and indicators
   - Severity classification
2. Detection and Analysis
   - Initial indicators and alerts
   - Investigation procedures
   - Evidence collection guidance
3. Containment Procedures
   - Immediate containment actions
   - Secondary containment measures
   - Limiting incident spread
4. Eradication Steps
   - Removing attacker access
   - Eliminating persistence mechanisms
   - Validating complete removal
5. Recovery Guidance
   - Service restoration
   - Security posture strengthening
   - Return to normal operations
6. Post-Incident Analysis
   - Root cause identification
   - Lessons learned documentation
   - Security improvements implementation
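The scripts throughout these playbooks refer to pre-built Google API clients (`reports_service`, `admin_service`, `gmail_service`, `drive_service`). The following is a minimal setup sketch, assuming a service account with domain-wide delegation; the key file path, admin address, and scope list are illustrative placeholders to adapt to your tenant and to the specific playbook steps you run.

```python
# Hedged setup sketch: builds the API clients used by the playbook scripts.
# Assumes a service account with domain-wide delegation; paths, addresses,
# and scopes below are placeholders, not a definitive configuration.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = [
    'https://www.googleapis.com/auth/admin.reports.audit.readonly',
    'https://www.googleapis.com/auth/admin.directory.user',
    'https://www.googleapis.com/auth/gmail.settings.basic',
    'https://www.googleapis.com/auth/gmail.settings.sharing',
    'https://www.googleapis.com/auth/drive',
]

def build_workspace_clients(key_file, admin_email, user_email=None):
    """Return the Google Workspace API clients used throughout these playbooks."""
    creds = service_account.Credentials.from_service_account_file(
        key_file, scopes=SCOPES)

    # Admin-level APIs act as a delegated administrator
    admin_creds = creds.with_subject(admin_email)
    reports_service = build('admin', 'reports_v1', credentials=admin_creds)
    admin_service = build('admin', 'directory_v1', credentials=admin_creds)

    # Per-user APIs (Gmail, Drive) impersonate the investigated user
    user_creds = creds.with_subject(user_email or admin_email)
    gmail_service = build('gmail', 'v1', credentials=user_creds)
    drive_service = build('drive', 'v3', credentials=user_creds)

    return reports_service, admin_service, gmail_service, drive_service
```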
## Account Compromise Incident Response

### Incident Overview

Description: Unauthorized access to one or more Google Workspace user accounts, potentially resulting in data theft, further access, or destructive actions.

Common Attack Vectors:

- Phishing campaigns targeting credentials
- Password spraying or brute-force attacks
- OAuth token theft or abuse
- Session hijacking
- Recovery email/phone compromise

Severity Classifications:

- Critical: Admin account, executive, or multiple user compromises
- High: Single user with access to sensitive data
- Medium: Single user with limited access
- Low: Service account with minimal permissions
### Detection and Analysis

Initial Indicators:

- Unusual login locations, times, or devices
- Suspicious email forwarding rules or filters
- Unexpected OAuth application authorizations
- Unusual file access or download patterns
- Password reset or recovery setting changes
Investigation Procedures:

1. Login Activity Analysis
2. Account Setting Changes
3. Email Configuration Review
4. OAuth Application Assessment
5. User Activity Timeline Construction
```python
# Python script to create a comprehensive user activity timeline
from datetime import datetime, timedelta

def create_user_activity_timeline(reports_service, user_email, days_back=7):
    """Create a timeline of all user activities across Google Workspace"""
    # Calculate start time (days back from now)
    start_time = (datetime.now() - timedelta(days=days_back)).strftime('%Y-%m-%d')

    # Applications to query
    applications = ['login', 'admin', 'drive', 'token', 'user_accounts',
                    'groups', 'calendar', 'gmail', 'gcp']

    all_activities = []

    # Collect activities across all applications
    for app in applications:
        try:
            results = reports_service.activities().list(
                userKey=user_email,
                applicationName=app,
                startTime=start_time,
                maxResults=1000
            ).execute()

            if 'items' in results:
                for item in results['items']:
                    # Extract basic event info
                    event = {
                        'time': item.get('id', {}).get('time'),
                        'application': app,
                        'event_name': item.get('events', [{}])[0].get('name', 'unknown'),
                        'ip_address': item.get('ipAddress', 'unknown'),
                        'parameters': {}
                    }

                    # Extract parameters
                    parameters = item.get('events', [{}])[0].get('parameters', [])
                    for param in parameters:
                        event['parameters'][param.get('name')] = param.get('value')

                    all_activities.append(event)
        except Exception as e:
            print(f"Error collecting {app} activities: {str(e)}")

    # Sort all activities by timestamp
    all_activities.sort(key=lambda x: x['time'])

    return all_activities
```
6. Evidence Collection (a log-preservation sketch follows this list)
   - Capture screenshots of suspicious activities
   - Export relevant logs for preservation
   - Document all detected indicators
   - Record incident timeline based on evidence
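As a complement to the evidence-collection step above, the following sketch exports the collected activity timeline to a timestamped JSON file for preservation. It reuses `create_user_activity_timeline()` from the timeline script above; the file-naming convention is an assumption to adapt to your evidence-handling procedures.

```python
# Hedged sketch: preserve collected audit evidence as a timestamped JSON file.
# Relies on create_user_activity_timeline() defined earlier in this playbook.
import json
from datetime import datetime, timezone

def preserve_user_evidence(reports_service, user_email, days_back=7):
    """Export a user's activity timeline to an evidence file."""
    timeline = create_user_activity_timeline(reports_service, user_email, days_back)

    evidence = {
        'subject': user_email,
        'collected_at': datetime.now(timezone.utc).isoformat(),
        'days_covered': days_back,
        'event_count': len(timeline),
        'events': timeline,
    }

    # Illustrative naming convention: subject plus collection timestamp
    filename = f"evidence_{user_email}_{datetime.now(timezone.utc):%Y%m%dT%H%M%SZ}.json"
    with open(filename, 'w') as fh:
        json.dump(evidence, fh, indent=2, default=str)

    return filename
```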
### Containment Procedures

Immediate Actions (a combined containment sketch follows this list):

1. Account Suspension
2. Password Reset
3. OAuth Token Revocation
4. Session Termination
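A hedged sketch of these four immediate actions using the Admin SDK Directory API is shown below. It assumes a delegated-admin `admin_service` client (see the setup sketch in the Playbook Structure section); adapt the password-rotation and reporting details to your own procedures.

```python
# Hedged sketch: immediate containment for a compromised account using the
# Admin SDK Directory API. Assumes an authorized delegated-admin client.
import secrets

def contain_compromised_account(admin_service, user_email):
    """Suspend the account, rotate credentials, revoke OAuth grants, end sessions."""
    report = {'user': user_email, 'actions': []}

    # 1. Suspend the account to block all access immediately
    admin_service.users().update(
        userKey=user_email,
        body={'suspended': True}
    ).execute()
    report['actions'].append('suspended')

    # 2. Rotate the password and force a change at next login
    admin_service.users().update(
        userKey=user_email,
        body={'password': secrets.token_urlsafe(24),
              'changePasswordAtNextLogin': True}
    ).execute()
    report['actions'].append('password_reset')

    # 3. Revoke all OAuth tokens granted by the user
    tokens = admin_service.tokens().list(userKey=user_email).execute()
    for token in tokens.get('items', []):
        admin_service.tokens().delete(
            userKey=user_email,
            clientId=token['clientId']
        ).execute()
        report['actions'].append(f"token_revoked:{token['clientId']}")

    # 4. Invalidate active web and device sessions
    admin_service.users().signOut(userKey=user_email).execute()
    report['actions'].append('signed_out')

    return report
```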
Secondary Containment:

1. Email Security Measures
2. Access Restrictions
3. Document Access Review
4. Group Membership Assessment
### Eradication Steps

1. Email Configuration Cleanup
```python
# Python script to identify and remove suspicious email configurations
def cleanup_suspicious_email_configs(admin_service, gmail_service, user_email):
    """Remove suspicious email configurations"""
    cleanup_report = {
        'forwarding_removed': [],
        'filters_removed': [],
        'delegates_removed': []
    }
    try:
        # Check and remove suspicious auto-forwarding
        forwarding = gmail_service.users().settings().getAutoForwarding(
            userId=user_email
        ).execute()
        if forwarding.get('enabled') and not is_authorized_forwarding(forwarding.get('emailAddress')):
            gmail_service.users().settings().updateAutoForwarding(
                userId=user_email,
                body={'enabled': False}
            ).execute()
            cleanup_report['forwarding_removed'].append(forwarding.get('emailAddress'))

        # Check and remove suspicious filters
        filters = gmail_service.users().settings().filters().list(
            userId=user_email
        ).execute()
        for filter_data in filters.get('filter', []):
            if is_suspicious_filter(filter_data):
                gmail_service.users().settings().filters().delete(
                    userId=user_email,
                    id=filter_data['id']
                ).execute()
                cleanup_report['filters_removed'].append(filter_data)

        # Check and remove unauthorized delegates
        delegates = gmail_service.users().settings().delegates().list(
            userId=user_email
        ).execute()
        for delegate in delegates.get('delegates', []):
            if not is_authorized_delegate(delegate.get('delegateEmail')):
                gmail_service.users().settings().delegates().delete(
                    userId=user_email,
                    delegateEmail=delegate.get('delegateEmail')
                ).execute()
                cleanup_report['delegates_removed'].append(delegate.get('delegateEmail'))

        return cleanup_report
    except Exception as e:
        return {'error': str(e)}


# Helper functions
def is_authorized_forwarding(email):
    """Check if email is in allowed forwarding list"""
    # Replace with your organization's logic
    allowed_domains = ['company.com', 'trusted-partner.com']
    return any(email.endswith('@' + domain) for domain in allowed_domains)


def is_suspicious_filter(filter_data):
    """Determine if a filter appears suspicious"""
    # Look for common suspicious patterns
    if 'forward' in str(filter_data).lower():
        return True
    if any(term in str(filter_data).lower() for term in
           ['security', 'password', 'verification', 'authenticate']):
        return True
    # Add additional detection logic as needed
    return False


def is_authorized_delegate(email):
    """Check if delegate is authorized"""
    # Replace with your organization's logic
    authorized_delegates = ['admin@company.com', 'executive-assistant@company.com']
    return email in authorized_delegates
```
2. 2FA Reset and Reconfiguration (see the backup-code and ASP cleanup sketch after this list)
3. Recovery Method Verification
4. Application Access Review and Cleanup
5. Permission Verification
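For the 2FA reset step, the following sketch invalidates backup verification codes and deletes application-specific passwords (ASPs) through the Directory API so the user must re-enroll with trusted factors. Re-enrolling 2-Step Verification itself is typically handled in the Admin console or through your identity tooling.

```python
# Hedged sketch for the 2FA reset step: clear backup codes and ASPs via the
# Directory API. Assumes a delegated-admin `admin_service` client.
def reset_second_factor_artifacts(admin_service, user_email):
    """Invalidate backup codes and remove ASPs for a recovered account."""
    result = {'backup_codes_invalidated': False, 'asps_deleted': []}

    # Invalidate all current backup verification codes
    admin_service.verificationCodes().invalidate(userKey=user_email).execute()
    result['backup_codes_invalidated'] = True

    # Remove application-specific passwords, which bypass interactive 2FA
    asps = admin_service.asps().list(userKey=user_email).execute()
    for asp in asps.get('items', []):
        admin_service.asps().delete(
            userKey=user_email,
            codeId=asp['codeId']
        ).execute()
        result['asps_deleted'].append(asp.get('name', asp['codeId']))

    return result
```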
### Recovery Guidance

1. Account Restoration (a restoration sketch follows this list)
2. Enhanced Security Implementation
3. Service Restoration Verification
   - Confirm email flow is normal
   - Verify access to necessary applications
   - Validate file access and sharing capabilities
   - Ensure calendar and meeting functionality
4. User Communication and Education
   - Provide security awareness refresher
   - Explain incident and response actions
   - Advise on security best practices
   - Schedule follow-up security training
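A small restoration sketch for the account-restoration step: it clears the suspension applied during containment and reads back the user's 2-Step Verification enrollment state before the account is returned to service. Field names follow the Directory API User resource.

```python
# Hedged sketch for account restoration: unsuspend the remediated account and
# confirm 2-Step Verification enrollment before declaring recovery complete.
def restore_account(admin_service, user_email):
    """Unsuspend a remediated account and verify its security posture."""
    # Re-enable the account
    admin_service.users().update(
        userKey=user_email,
        body={'suspended': False}
    ).execute()

    # Read back the enrollment state for the recovery checklist
    user = admin_service.users().get(userKey=user_email).execute()
    return {
        'user': user_email,
        'suspended': user.get('suspended', False),
        'enrolled_in_2sv': user.get('isEnrolledIn2Sv', False),
        'enforced_in_2sv': user.get('isEnforcedIn2Sv', False),
    }
```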
### Post-Incident Analysis

1. Root Cause Identification
   - Determine initial access vector
   - Identify security control failures
   - Document complete attack timeline
   - Assess effectiveness of security controls
2. Security Enhancement Planning
   - Identify specific control improvements
   - Develop enhancement implementation plan
   - Create timeline for security upgrades
   - Allocate resources for improvements
3. Documentation and Reporting
   - Complete incident documentation
   - Prepare executive summary
   - Document lessons learned
   - Create remediation tracking document
4. Process Improvement
   - Update detection capabilities
   - Enhance response procedures
   - Improve containment strategies
   - Strengthen recovery processes
## Data Exfiltration Incident Response

### Incident Overview

Description: Unauthorized extraction of sensitive data from Google Workspace services, including Drive, Gmail, Calendar, or other collaboration tools.

Common Attack Vectors:

- Google Takeout exports
- Mass document downloads
- Email forwarding rules
- OAuth applications with extensive access
- Unauthorized API access to data
- Third-party integrations with excessive permissions

Severity Classifications:

- Critical: Mass exfiltration of regulated or sensitive data
- High: Targeted exfiltration of sensitive information
- Medium: Limited exposure of internal information
- Low: Export of non-sensitive, non-proprietary data
### Detection and Analysis

Initial Indicators:

- Unusual volume of file downloads
- Suspicious Google Takeout exports
- Unexpected external sharing of documents
- Unusual API request patterns
- Data access from atypical locations or times
Investigation Procedures:

1. Drive Activity Analysis
2. Data Access Volume Assessment
```python
# Python script to detect unusual data access volumes
from datetime import datetime, timedelta

def detect_unusual_data_access(reports_service, timeframe_days=3):
    """Identify users with unusually high data access volumes"""
    # Calculate start time
    start_time = (datetime.now() - timedelta(days=timeframe_days)).strftime('%Y-%m-%d')

    # Get Drive activities (download/view/export events are filtered
    # client-side below; the API's eventName filter accepts a single event per call)
    drive_activities = reports_service.activities().list(
        userKey='all',
        applicationName='drive',
        startTime=start_time,
        maxResults=1000
    ).execute()

    # Aggregate by user
    user_activity = {}
    for activity in drive_activities.get('items', []):
        user = activity['actor']['email']
        if user not in user_activity:
            user_activity[user] = {
                'download_count': 0,
                'view_count': 0,
                'export_count': 0,
                'files_accessed': set(),
                'access_ips': set()
            }

        # Extract event details
        event = activity.get('events', [{}])[0]
        event_name = event.get('name', '')

        # Update counts
        if 'download' in event_name.lower():
            user_activity[user]['download_count'] += 1
        elif 'view' in event_name.lower():
            user_activity[user]['view_count'] += 1
        elif 'export' in event_name.lower():
            user_activity[user]['export_count'] += 1

        # Add file ID
        doc_id = next((p.get('value') for p in event.get('parameters', [])
                       if p.get('name') == 'doc_id'), None)
        if doc_id:
            user_activity[user]['files_accessed'].add(doc_id)

        # Add IP address
        if activity.get('ipAddress'):
            user_activity[user]['access_ips'].add(activity.get('ipAddress'))

    # Convert sets to counts for easier analysis
    for user in user_activity:
        user_activity[user]['unique_files_accessed'] = len(user_activity[user]['files_accessed'])
        user_activity[user]['unique_ips'] = len(user_activity[user]['access_ips'])
        user_activity[user]['ip_list'] = list(user_activity[user]['access_ips'])
        del user_activity[user]['files_accessed']
        del user_activity[user]['access_ips']

    # Calculate organizational averages
    org_avg = {
        'download_avg': 0,
        'view_avg': 0,
        'export_avg': 0,
        'files_avg': 0
    }

    if user_activity:
        user_count = len(user_activity)
        download_total = sum(u['download_count'] for u in user_activity.values())
        view_total = sum(u['view_count'] for u in user_activity.values())
        export_total = sum(u['export_count'] for u in user_activity.values())
        files_total = sum(u['unique_files_accessed'] for u in user_activity.values())

        org_avg = {
            'download_avg': download_total / user_count,
            'view_avg': view_total / user_count,
            'export_avg': export_total / user_count,
            'files_avg': files_total / user_count
        }

    # Identify outliers (2x average as simple threshold)
    outliers = []
    for user, stats in user_activity.items():
        if (stats['download_count'] > org_avg['download_avg'] * 2 or
                stats['export_count'] > org_avg['export_avg'] * 2):
            outliers.append({
                'user': user,
                'stats': stats,
                'org_comparison': {
                    'download_ratio': stats['download_count'] / max(org_avg['download_avg'], 1),
                    'export_ratio': stats['export_count'] / max(org_avg['export_avg'], 1)
                }
            })

    return {
        'org_averages': org_avg,
        'outliers': sorted(outliers, key=lambda x: max(
            x['org_comparison']['download_ratio'],
            x['org_comparison']['export_ratio']
        ), reverse=True)
    }
```
3. Google Takeout Analysis
4. External Sharing Review (see the Drive sharing sketch after this list)
5. OAuth Application Assessment
6. Evidence Collection
   - Document data access patterns
   - Capture screenshots of relevant logs
   - Create timeline of data access events
   - List potentially exposed documents
   - Preserve original logs for investigation
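For the external-sharing review step, the following sketch lists files visible to anyone with the link and flags permissions granted to addresses outside the primary domain. It uses the Drive v3 `visibility` search term and is intended to run with a `drive_service` impersonating the investigated user; `company.com` is an illustrative placeholder for your internal domain.

```python
# Hedged sketch for the external-sharing review step, using the Drive v3 API.
def review_external_sharing(drive_service, internal_domain='company.com'):
    """Return files exposed via link sharing, with any external recipients."""
    exposed = []
    page_token = None

    while True:
        resp = drive_service.files().list(
            q="visibility = 'anyoneWithLink' or visibility = 'anyoneCanFind'",
            fields='nextPageToken, files(id, name, owners, permissions, webViewLink)',
            pageSize=100,
            pageToken=page_token
        ).execute()

        for f in resp.get('files', []):
            # Collect direct grants to accounts outside the internal domain
            external = [
                p.get('emailAddress') for p in f.get('permissions', [])
                if p.get('type') == 'user'
                and p.get('emailAddress')
                and not p['emailAddress'].endswith('@' + internal_domain)
            ]
            exposed.append({
                'id': f['id'],
                'name': f['name'],
                'link': f.get('webViewLink'),
                'external_recipients': external,
            })

        page_token = resp.get('nextPageToken')
        if not page_token:
            return exposed
```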
### Containment Procedures

Immediate Actions:

1. Account Access Restriction
2. External Sharing Limitation
3. Data Export Restriction
4. OAuth Application Blocking
Secondary Containment:

1. Drive Access Control Review
2. API Access Limitation
3. Data Loss Prevention Enhancement
### Eradication Steps

1. Remove Unauthorized Access
```python
# Python script to revert unauthorized sharing
def revert_unauthorized_sharing(drive_service, start_date):
    """Revert unauthorized document sharing since given date"""
    # Query files modified since start_date (RFC 3339 / YYYY-MM-DD form)
    time_filter = f"modifiedTime > '{start_date}'"

    # Get candidate files with their permissions
    shared_files = drive_service.files().list(
        q=time_filter,
        spaces='drive',
        fields='files(id,name,permissions)',
        pageSize=1000
    ).execute()

    revocation_results = []
    for file in shared_files.get('files', []):
        file_id = file['id']
        file_name = file['name']

        # Check each permission
        for permission in file.get('permissions', []):
            # Identify suspicious permissions (external or anyone)
            if is_suspicious_permission(permission):
                try:
                    # Delete suspicious permission
                    drive_service.permissions().delete(
                        fileId=file_id,
                        permissionId=permission['id']
                    ).execute()
                    revocation_results.append({
                        'file_id': file_id,
                        'file_name': file_name,
                        'permission': permission,
                        'status': 'revoked'
                    })
                except Exception as e:
                    revocation_results.append({
                        'file_id': file_id,
                        'file_name': file_name,
                        'permission': permission,
                        'status': 'error',
                        'error': str(e)
                    })

    return revocation_results


def is_suspicious_permission(permission):
    """Determine if a permission is suspicious based on criteria"""
    # Check for 'anyone' access
    if permission.get('type') == 'anyone':
        return True

    # Check for external email domains
    if permission.get('type') == 'user' and permission.get('emailAddress'):
        if not permission['emailAddress'].endswith('@company.com'):  # Replace with your domain
            return True

    # Add additional logic as needed
    return False
```
2. Clean Up Automated Access
3. Revoke OAuth Authorizations
4. Remove Persistent Access Mechanisms (an Apps Script cleanup sketch follows this list)
   - Disable unauthorized service accounts
   - Remove compromised API keys
   - Delete unauthorized Apps Script projects
   - Uninstall malicious Workspace Add-ons
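For the Apps Script cleanup bullet, the sketch below enumerates Apps Script projects in the investigated user's Drive (they are stored as Drive files with the `application/vnd.google-apps.script` MIME type) and optionally moves flagged ones to the trash rather than hard-deleting them, so evidence is retained. Run it with a `drive_service` impersonating that user; the trash-versus-delete choice is an assumption to align with your evidence policy.

```python
# Hedged sketch for the Apps Script cleanup bullet, using the Drive v3 API.
def list_and_trash_apps_script_projects(drive_service, trash_ids=None):
    """List the user's Apps Script projects and optionally move some to trash."""
    trash_ids = set(trash_ids or [])
    projects = []
    page_token = None

    while True:
        resp = drive_service.files().list(
            q="mimeType = 'application/vnd.google-apps.script' and trashed = false",
            fields='nextPageToken, files(id, name, modifiedTime, owners)',
            pageToken=page_token
        ).execute()

        for f in resp.get('files', []):
            entry = {'id': f['id'], 'name': f['name'],
                     'modified': f.get('modifiedTime'), 'action': 'review'}
            if f['id'] in trash_ids:
                # Move the flagged project to trash instead of hard-deleting it
                drive_service.files().update(
                    fileId=f['id'], body={'trashed': True}).execute()
                entry['action'] = 'trashed'
            projects.append(entry)

        page_token = resp.get('nextPageToken')
        if not page_token:
            return projects
```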
### Recovery Guidance

1. Service Restoration
2. Data Access Control Implementation
3. DLP Enhancement
4. API Security Hardening
### Post-Incident Analysis

1. Data Exposure Assessment
   - Identify all exposed documents
   - Determine sensitivity of exposed data
   - Assess regulatory compliance impact
   - Calculate potential business impact
2. Control Gap Identification
   - Identify DLP control failures
   - Assess sharing restriction effectiveness
   - Evaluate export control limitations
   - Review API security measures
3. Security Enhancement Implementation
   - Develop enhanced data protection controls
   - Implement improved monitoring capabilities
   - Deploy additional access restrictions
   - Create data classification system
4. Documentation and Reporting
   - Prepare data exposure report
   - Document control recommendations
   - Create incident summary
   - Develop security enhancement roadmap
## OAuth Token Abuse Incident Response

### Incident Overview

Description: Unauthorized access or activities using compromised OAuth tokens, potentially resulting in persistent access despite password changes and authentication enhancements.

Common Attack Vectors:

- Phishing campaigns for OAuth authorization
- Malicious third-party applications
- Token theft via client-side attacks
- Excessive permission requests from applications
- Stolen refresh tokens from compromised devices

Severity Classifications:

- Critical: Administrative access token compromise
- High: User token with extensive data access
- Medium: Limited-scope token for specific services
- Low: Minimal-permission token with no sensitive data access
### Detection and Analysis

Initial Indicators:

- Unusual application authorization activity
- API usage from unexpected locations
- Application access despite password changes
- Abnormal API request patterns or volumes
- Unusual scopes requested by applications
Investigation Procedures:

1. Token Activity Analysis
2. Application Authorization Review
3. API Usage Analysis
```python
# Python script to analyze API usage patterns
from datetime import datetime, timedelta

def analyze_api_usage(reports_service, days_back=7):
    """Analyze API usage patterns for anomalies"""
    # Calculate start time
    start_time = (datetime.now() - timedelta(days=days_back)).strftime('%Y-%m-%d')

    # Get API activity events
    api_events = reports_service.activities().list(
        userKey='all',
        applicationName='token',
        startTime=start_time,
        maxResults=1000
    ).execute()

    # Analyze application usage
    app_activity = {}
    user_app_activity = {}

    for event in api_events.get('items', []):
        # Extract basic information
        user = event['actor']['email']
        parameters = event.get('events', [{}])[0].get('parameters', [])
        app_name = next((p.get('value') for p in parameters if p.get('name') == 'app_name'), 'Unknown')
        client_id = next((p.get('value') for p in parameters if p.get('name') == 'client_id'), 'Unknown')
        scopes = next((p.get('value') for p in parameters if p.get('name') == 'scope'), '').split()

        # Track by application
        app_key = f"{app_name}:{client_id}"
        if app_key not in app_activity:
            app_activity[app_key] = {
                'app_name': app_name,
                'client_id': client_id,
                'user_count': set(),
                'ip_addresses': set(),
                'scope_count': len(scopes),
                'scopes': scopes,
                'high_risk_scopes': [s for s in scopes if is_high_risk_scope(s)],
                'event_count': 0
            }

        app_activity[app_key]['user_count'].add(user)
        if event.get('ipAddress'):
            app_activity[app_key]['ip_addresses'].add(event.get('ipAddress'))
        app_activity[app_key]['event_count'] += 1

        # Track by user and application
        user_app_key = f"{user}:{app_key}"
        if user_app_key not in user_app_activity:
            user_app_activity[user_app_key] = {
                'user': user,
                'app_name': app_name,
                'client_id': client_id,
                'ip_addresses': set(),
                'scopes': scopes,
                'event_count': 0,
                'first_seen': event.get('id', {}).get('time'),
                'last_seen': event.get('id', {}).get('time')
            }

        user_app_activity[user_app_key]['event_count'] += 1
        if event.get('ipAddress'):
            user_app_activity[user_app_key]['ip_addresses'].add(event.get('ipAddress'))

        # Update last seen
        event_time = event.get('id', {}).get('time')
        if event_time and event_time > user_app_activity[user_app_key]['last_seen']:
            user_app_activity[user_app_key]['last_seen'] = event_time

    # Convert sets to counts and lists
    for app_key in app_activity:
        app_activity[app_key]['user_count'] = len(app_activity[app_key]['user_count'])
        app_activity[app_key]['ip_count'] = len(app_activity[app_key]['ip_addresses'])
        app_activity[app_key]['ip_list'] = list(app_activity[app_key]['ip_addresses'])
        del app_activity[app_key]['ip_addresses']

    for user_app_key in user_app_activity:
        user_app_activity[user_app_key]['ip_count'] = len(user_app_activity[user_app_key]['ip_addresses'])
        user_app_activity[user_app_key]['ip_list'] = list(user_app_activity[user_app_key]['ip_addresses'])
        del user_app_activity[user_app_key]['ip_addresses']

    # Identify suspicious applications
    suspicious_apps = []
    for app_key, data in app_activity.items():
        risk_score = calculate_app_risk_score(data)
        if risk_score > 7:  # Arbitrary threshold
            suspicious_apps.append({
                'app_key': app_key,
                'risk_score': risk_score,
                'data': data
            })

    return {
        'app_activity': app_activity,
        'user_app_activity': user_app_activity,
        'suspicious_apps': sorted(suspicious_apps, key=lambda x: x['risk_score'], reverse=True)
    }


def is_high_risk_scope(scope):
    """Determine if a scope is high-risk"""
    high_risk_indicators = [
        'https://mail.google.com/',
        'https://www.googleapis.com/auth/drive',
        'https://www.googleapis.com/auth/admin',
        'https://www.googleapis.com/auth/cloud-platform',
        '.readonly',  # Wildcard for read access to services
        'https://www.googleapis.com/auth/contacts',
        'https://www.googleapis.com/auth/calendar'
    ]
    return any(indicator in scope for indicator in high_risk_indicators)


def calculate_app_risk_score(app_data):
    """Calculate a risk score for an application based on various factors"""
    risk_score = 0

    # More users = higher potential impact
    if app_data['user_count'] > 10:
        risk_score += 2
    elif app_data['user_count'] > 5:
        risk_score += 1

    # High number of scopes increases risk
    if app_data['scope_count'] > 10:
        risk_score += 3
    elif app_data['scope_count'] > 5:
        risk_score += 2
    elif app_data['scope_count'] > 2:
        risk_score += 1

    # High-risk scopes significantly increase risk
    risk_score += min(len(app_data['high_risk_scopes']) * 2, 6)

    # Multiple IP addresses may indicate broader usage
    if len(app_data['ip_list']) > 5:
        risk_score += 1

    return risk_score
```
4. Token Revocation Testing
   - Test revocation impact on applications
   - Identify applications unaffected by password changes
   - Verify token expiration enforcement
5. Scope Analysis (a scope-listing sketch follows this list)
   - Identify token scopes and permissions
   - Evaluate potential access granted
   - Assess data exposure risk
6. Evidence Collection
   - Document affected applications and tokens
   - Capture relevant API access logs
   - Record suspicious token activity
   - Preserve authorization request evidence
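Supporting the scope-analysis step, this sketch lists the OAuth grants a user currently holds via the Directory API `tokens` collection and flags grants carrying high-risk scopes; it reuses `is_high_risk_scope()` from the API usage script above.

```python
# Hedged sketch for the scope-analysis step: enumerate a user's current OAuth
# grants and surface the highest-risk ones first.
def analyze_user_token_scopes(admin_service, user_email):
    """List a user's OAuth grants with their scopes and a high-risk flag."""
    grants = []
    tokens = admin_service.tokens().list(userKey=user_email).execute()

    for token in tokens.get('items', []):
        scopes = token.get('scopes', [])
        grants.append({
            'display_text': token.get('displayText'),
            'client_id': token.get('clientId'),
            'native_app': token.get('nativeApp', False),
            'scopes': scopes,
            'high_risk_scopes': [s for s in scopes if is_high_risk_scope(s)],
        })

    # Highest-risk grants first
    grants.sort(key=lambda g: len(g['high_risk_scopes']), reverse=True)
    return grants
```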
### Containment Procedures

Immediate Actions:

1. Token Revocation
2. Account Security Enhancement
3. Application Blocking
Secondary Containment:

1. Access Scope Restriction
2. API Access Monitoring
3. Application Authorization Controls
### Eradication Steps

1. Comprehensive Token Revocation
```python
# Python script to revoke tokens for specific applications
def revoke_application_tokens(reports_service, admin_service, client_id):
    """Revoke tokens for a specific client ID across all users"""
    # Get all users
    users_result = admin_service.users().list(
        customer='my_customer',
        maxResults=500
    ).execute()

    revocation_results = []

    for user in users_result.get('users', []):
        user_email = user['primaryEmail']

        # Get token events to identify tokens for this client
        try:
            token_events = reports_service.activities().list(
                userKey=user_email,
                applicationName='token',
                maxResults=1000
            ).execute()

            # Look for matching client ID
            has_token = False
            for event in token_events.get('items', []):
                parameters = event.get('events', [{}])[0].get('parameters', [])
                event_client_id = next((p.get('value') for p in parameters
                                        if p.get('name') == 'client_id'), None)
                if event_client_id == client_id:
                    has_token = True
                    break

            if has_token:
                # Revoke access for this user
                try:
                    admin_service.tokens().delete(
                        userKey=user_email,
                        clientId=client_id
                    ).execute()
                    revocation_results.append({
                        'user': user_email,
                        'status': 'revoked'
                    })
                except Exception as e:
                    revocation_results.append({
                        'user': user_email,
                        'status': 'error',
                        'error': str(e)
                    })
        except Exception as e:
            revocation_results.append({
                'user': user_email,
                'status': 'lookup_error',
                'error': str(e)
            })

    return revocation_results
```
2. OAuth App Removal
3. Service Account Cleanup
4. Extension and Add-on Review
### Recovery Guidance

1. Application Access Restoration
2. API Control Enhancement
3. OAuth Governance Implementation
   - Deploy OAuth application inventory process
   - Implement regular access reviews
   - Create application authorization framework
   - Establish scope restriction policies
4. User Communication and Training
   - Provide guidance on safe application authorization
   - Educate on permission evaluation
   - Deploy phishing awareness for OAuth-based attacks
   - Create clear escalation procedures
### Post-Incident Analysis

1. Token Abuse Impact Assessment
   - Identify accessed data and services
   - Determine duration of unauthorized access
   - Assess regulatory compliance impact
   - Document business impact
2. OAuth Security Evaluation
   - Review OAuth authorization controls
   - Assess token management practices
   - Evaluate monitoring effectiveness
   - Identify security control gaps
3. Documentation and Reporting
   - Document affected applications and users
   - Create comprehensive timeline
   - Prepare executive summary
   - Develop token security roadmap
4. Security Enhancement Implementation
   - Deploy OAuth governance framework
   - Implement enhanced monitoring
   - Establish token usage baselines
   - Create application vetting process
## Admin Privilege Escalation Incident Response

### Incident Overview

Description: Unauthorized acquisition or use of administrative privileges within Google Workspace, potentially resulting in organization-wide security compromises, configuration changes, or data access.

Common Attack Vectors:

- Phishing attacks targeting admins
- Role privilege elevation through compromised accounts
- Creation of unauthorized admin accounts
- Exploitation of delegated admin rights
- OAuth applications with admin capabilities
- Recovery email/phone compromise for admins

Severity Classifications:

- Critical: Super admin compromise
- High: Privilege escalation to admin with significant rights
- Medium: Limited admin role compromise
- Low: Attempted but unsuccessful admin privilege acquisition
### Detection and Analysis

Initial Indicators:

- Unexpected admin role assignments
- Unusual administrative console access
- Configuration changes without approval
- Suspicious admin activities from unusual locations
- Creation of new admin accounts
- Modification of admin recovery settings
Investigation Procedures:

1. Admin Activity Analysis
2. Admin Role Assessment
3. Admin Access Pattern Analysis
```python
# Python script to analyze admin access patterns
from datetime import datetime, timedelta

def analyze_admin_access_patterns(reports_service, days_back=30):
    """Analyze admin console access patterns for anomalies"""
    # Calculate start time
    start_time = (datetime.now() - timedelta(days=days_back)).strftime('%Y-%m-%d')

    # Get admin console activities
    admin_events = reports_service.activities().list(
        userKey='all',
        applicationName='admin',
        startTime=start_time,
        maxResults=1000
    ).execute()

    # Organize by user
    admin_access = {}

    for event in admin_events.get('items', []):
        user = event['actor']['email']
        event_time = event.get('id', {}).get('time')
        event_name = event.get('events', [{}])[0].get('name')
        ip_address = event.get('ipAddress')

        if user not in admin_access:
            admin_access[user] = {
                'access_count': 0,
                'ip_addresses': set(),
                'common_events': {},
                'access_times': [],
                'first_seen': event_time,
                'last_seen': event_time,
                'critical_events': []
            }

        # Update user data
        admin_access[user]['access_count'] += 1
        if ip_address:
            admin_access[user]['ip_addresses'].add(ip_address)

        # Track event frequency
        if event_name not in admin_access[user]['common_events']:
            admin_access[user]['common_events'][event_name] = 0
        admin_access[user]['common_events'][event_name] += 1

        # Update access times
        admin_access[user]['access_times'].append(event_time)

        # Update first/last seen
        if event_time < admin_access[user]['first_seen']:
            admin_access[user]['first_seen'] = event_time
        if event_time > admin_access[user]['last_seen']:
            admin_access[user]['last_seen'] = event_time

        # Check for critical events
        if is_critical_admin_event(event):
            admin_access[user]['critical_events'].append({
                'event_name': event_name,
                'time': event_time,
                'ip_address': ip_address,
                'details': extract_event_details(event)
            })

    # Process data for analysis
    admin_insights = []
    for user, data in admin_access.items():
        # Convert sets to lists/counts
        data['ip_count'] = len(data['ip_addresses'])
        data['ip_list'] = list(data['ip_addresses'])
        del data['ip_addresses']

        # Sort critical events by time
        data['critical_events'].sort(key=lambda x: x['time'])

        # Calculate working hours access percentage
        work_hours_access = calculate_work_hours_percentage(data['access_times'])
        data['work_hours_percentage'] = work_hours_access

        # Calculate common events
        data['common_events'] = sorted(
            [{'event': k, 'count': v} for k, v in data['common_events'].items()],
            key=lambda x: x['count'],
            reverse=True
        )

        # Calculate anomaly score
        anomaly_score = calculate_admin_anomaly_score(data)

        admin_insights.append({
            'user': user,
            'data': data,
            'anomaly_score': anomaly_score
        })

    # Sort by anomaly score
    admin_insights.sort(key=lambda x: x['anomaly_score'], reverse=True)

    return admin_insights


def is_critical_admin_event(event):
    """Determine if an admin event is critical"""
    critical_event_types = [
        'GRANT_ADMIN_PRIVILEGE',
        'CREATE_ROLE',
        'CHANGE_ROLE_SCOPE',
        'ADD_ROLE_PRIVILEGE',
        'CREATE_USER',
        'CHANGE_USER_PASSWORD',
        'TOGGLE_SERVICE_ACCOUNT',
        'ADD_SERVICE_ACCOUNT_PRIVILEGE',
        'CHANGE_TWO_STEP_VERIFICATION',
        'CHANGE_RECOVERY_EMAIL',
        'CHANGE_RECOVERY_PHONE'
    ]
    event_name = event.get('events', [{}])[0].get('name')
    return event_name in critical_event_types


def extract_event_details(event):
    """Extract important details from an event"""
    details = {}
    parameters = event.get('events', [{}])[0].get('parameters', [])
    for param in parameters:
        details[param.get('name')] = param.get('value')
    return details


def calculate_work_hours_percentage(timestamps):
    """Calculate percentage of access during work hours (8AM-6PM)"""
    if not timestamps:
        return 0

    work_hours_count = 0
    for timestamp in timestamps:
        # Convert to datetime object
        try:
            dt = datetime.fromisoformat(timestamp.replace('Z', '+00:00'))
            # Check if access is during work hours (8AM-6PM, assuming UTC)
            hour = dt.hour
            if 8 <= hour < 18 and dt.weekday() < 5:  # Weekday and work hours
                work_hours_count += 1
        except Exception:
            pass

    return (work_hours_count / len(timestamps)) * 100


def calculate_admin_anomaly_score(data):
    """Calculate an anomaly score for admin activity"""
    score = 0

    # Multiple IPs increases risk
    if data['ip_count'] > 3:
        score += min(data['ip_count'] - 3, 5)

    # Low work-hours percentage is suspicious
    if data['work_hours_percentage'] < 70:
        score += min((70 - data['work_hours_percentage']) / 10, 5)

    # Critical events increase risk significantly
    score += min(len(data['critical_events']) * 2, 10)

    return score
```
4. Critical Configuration Change Review (a privilege-change audit sketch follows this list)
5. Evidence Collection
   - Document all suspicious admin activities
   - Capture screenshots of admin console logs
   - Create timeline of privilege escalation events
   - Preserve original audit logs for investigation
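For the critical-configuration-change review, the sketch below pulls recent privilege-related events from the admin audit log. The Reports API accepts a single `eventName` per call, so the critical event names (taken from the access-pattern script above) are queried one at a time; verify the names against your own audit log before relying on them.

```python
# Hedged sketch for the configuration-change review step, using the Reports API.
from datetime import datetime, timedelta

def review_privilege_changes(reports_service, days_back=30):
    """Collect recent admin events that grant or alter privileges."""
    start_time = (datetime.now() - timedelta(days=days_back)).strftime('%Y-%m-%d')
    critical_events = ['GRANT_ADMIN_PRIVILEGE', 'CREATE_ROLE',
                       'CHANGE_ROLE_SCOPE', 'ADD_ROLE_PRIVILEGE']
    findings = []

    for event_name in critical_events:
        resp = reports_service.activities().list(
            userKey='all',
            applicationName='admin',
            eventName=event_name,
            startTime=start_time,
            maxResults=1000
        ).execute()

        for item in resp.get('items', []):
            params = item.get('events', [{}])[0].get('parameters', [])
            findings.append({
                'event': event_name,
                'time': item.get('id', {}).get('time'),
                'actor': item.get('actor', {}).get('email'),
                'ip_address': item.get('ipAddress'),
                'details': {p.get('name'): p.get('value') for p in params},
            })

    # Chronological order for the incident timeline
    findings.sort(key=lambda f: f['time'] or '')
    return findings
```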
### Containment Procedures

Immediate Actions:

1. Admin Account Security
2. Role Assignment Review and Restriction (a role-assignment review sketch follows this list)
3. Critical Function Restriction
4. Secure Super Admin Access
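For the role-assignment review step, this sketch enumerates current admin role assignments with the Directory API and revokes any assignment the responder has explicitly marked as unauthorized. The allow/deny decision is deliberately left to a human; the function only automates the listing and the revocation call.

```python
# Hedged sketch for the role-assignment review step, using the Directory API.
def review_role_assignments(admin_service, unauthorized_assignment_ids=None):
    """List admin role assignments and revoke the ones marked unauthorized."""
    unauthorized = set(unauthorized_assignment_ids or [])
    report = {'assignments': [], 'revoked': []}
    page_token = None

    while True:
        resp = admin_service.roleAssignments().list(
            customer='my_customer',
            pageToken=page_token
        ).execute()

        for assignment in resp.get('items', []):
            report['assignments'].append({
                'assignment_id': assignment.get('roleAssignmentId'),
                'role_id': assignment.get('roleId'),
                'assigned_to': assignment.get('assignedTo'),
                'scope_type': assignment.get('scopeType'),
            })
            if assignment.get('roleAssignmentId') in unauthorized:
                # Revoke only assignments explicitly flagged by the responder
                admin_service.roleAssignments().delete(
                    customer='my_customer',
                    roleAssignmentId=assignment['roleAssignmentId']
                ).execute()
                report['revoked'].append(assignment.get('roleAssignmentId'))

        page_token = resp.get('nextPageToken')
        if not page_token:
            return report
```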
Secondary Containment:

1. Admin Console Access Restriction
2. API Access Limitation
3. Delegated Administration Review
### Eradication Steps

1. Remove Unauthorized Access
2. Revert Unauthorized Changes
3. Rebuild Admin Structure
4. Clean Up Persistent Access
### Recovery Guidance

1. Admin Access Restoration
2. Admin Privilege Enhancement
3. Multi-Party Authorization Implementation
   - Develop multi-admin approval for critical changes
   - Create change management processes
   - Implement administrative separation of duties
   - Establish emergency access procedures
4. Enhanced Monitoring Deployment
### Post-Incident Analysis

1. Admin Access Impact Assessment
   - Document unauthorized admin actions
   - Assess impact of configuration changes
   - Determine potential data access or exposure
   - Evaluate regulatory compliance implications
2. Admin Security Control Evaluation
   - Review administrative access controls
   - Assess privilege management practices
   - Evaluate separation of duties implementation
   - Identify security control gaps
3. Documentation and Reporting
   - Create comprehensive incident timeline
   - Document security recommendations
   - Prepare executive summary
   - Develop admin security roadmap
4. Security Enhancement Implementation
   - Deploy enhanced admin access controls
   - Implement comprehensive audit procedures
   - Establish admin baseline behavior profiles
   - Create admin security verification processes
## Malicious Application Installation Incident Response

### Incident Overview

Description: Installation of unauthorized, malicious, or excessively permissioned applications in the Google Workspace environment, potentially resulting in data exposure, credential theft, or persistent unauthorized access.

Common Attack Vectors:

- Malicious Marketplace applications
- Phishing campaigns for application authorization
- Third-party applications with excessive permissions
- Rogue browser extensions with Workspace access
- Malicious Apps Script implementations
- OAuth phishing for application installation

Severity Classifications:

- Critical: Admin-level application with widespread deployment
- High: Data access application with significant adoption
- Medium: Limited access application with targeted deployment
- Low: Minimal risk application with restricted permissions
### Detection and Analysis

Initial Indicators:

- Unusual application installation patterns
- Applications requesting excessive permissions
- User reports of suspicious OAuth requests
- Unexpected data access or sharing
- Abnormal API usage from applications
- Unusual Chrome extension behavior
Investigation Procedures:

1. Application Inventory Analysis
2. OAuth Token Activity Review
3. Marketplace Application Assessment
4. Chrome Extension Inventory
5. Application Behavior Analysis
```python
# Python script to analyze application behavior for anomalies
from datetime import datetime, timedelta

def analyze_application_behavior(reports_service, days_back=7):
    """Analyze application behavior for suspicious patterns"""
    # Calculate start time
    start_time = (datetime.now() - timedelta(days=days_back)).strftime('%Y-%m-%d')

    # Get application activities
    app_events = {}

    # Check various application types
    for app_type in ['token', 'drive', 'mobile']:
        events = reports_service.activities().list(
            userKey='all',
            applicationName=app_type,
            startTime=start_time,
            maxResults=1000
        ).execute()
        app_events[app_type] = events.get('items', [])

    # Analyze OAuth application behavior
    oauth_apps = {}

    for event in app_events.get('token', []):
        # Extract application details
        parameters = event.get('events', [{}])[0].get('parameters', [])
        client_id = next((p.get('value') for p in parameters if p.get('name') == 'client_id'), 'unknown')
        app_name = next((p.get('value') for p in parameters if p.get('name') == 'app_name'), 'unknown')
        scopes = next((p.get('value') for p in parameters if p.get('name') == 'scope'), '').split()
        user = event['actor']['email']

        # Create app entry if needed
        app_key = f"{app_name}:{client_id}"
        if app_key not in oauth_apps:
            oauth_apps[app_key] = {
                'app_name': app_name,
                'client_id': client_id,
                'users': set(),
                'scopes': scopes,
                'scope_count': len(scopes),
                'high_risk_scopes': [s for s in scopes if is_high_risk_scope(s)],
                'ip_addresses': set(),
                'access_count': 0,
                'first_seen': event.get('id', {}).get('time'),
                'last_seen': event.get('id', {}).get('time')
            }

        # Update app data
        oauth_apps[app_key]['users'].add(user)
        oauth_apps[app_key]['access_count'] += 1
        if event.get('ipAddress'):
            oauth_apps[app_key]['ip_addresses'].add(event.get('ipAddress'))

        # Update time range
        event_time = event.get('id', {}).get('time')
        if event_time < oauth_apps[app_key]['first_seen']:
            oauth_apps[app_key]['first_seen'] = event_time
        if event_time > oauth_apps[app_key]['last_seen']:
            oauth_apps[app_key]['last_seen'] = event_time

    # Process data for analysis
    app_analysis = []
    for app_key, data in oauth_apps.items():
        # Convert sets to counts
        data['user_count'] = len(data['users'])
        data['unique_ips'] = len(data['ip_addresses'])
        data['ip_list'] = list(data['ip_addresses'])
        data['user_list'] = list(data['users'])
        del data['ip_addresses']
        del data['users']

        # Calculate risk score
        risk_score = calculate_app_risk_score(data)

        app_analysis.append({
            'app_key': app_key,
            'data': data,
            'risk_score': risk_score
        })

    # Sort by risk score
    app_analysis.sort(key=lambda x: x['risk_score'], reverse=True)

    return app_analysis


def is_high_risk_scope(scope):
    """Determine if a scope is high-risk"""
    high_risk_indicators = [
        'https://mail.google.com/',
        'https://www.googleapis.com/auth/drive',
        'https://www.googleapis.com/auth/admin',
        'https://www.googleapis.com/auth/cloud-platform',
        '.readonly',  # Wildcard for read access to services
        'https://www.googleapis.com/auth/contacts',
        'https://www.googleapis.com/auth/calendar'
    ]
    return any(indicator in scope for indicator in high_risk_indicators)


def calculate_app_risk_score(app_data):
    """Calculate risk score for an application"""
    risk_score = 0

    # More users = higher potential impact
    if app_data['user_count'] > 50:
        risk_score += 3
    elif app_data['user_count'] > 10:
        risk_score += 2
    elif app_data['user_count'] > 3:
        risk_score += 1

    # High number of scopes increases risk
    if app_data['scope_count'] > 10:
        risk_score += 3
    elif app_data['scope_count'] > 5:
        risk_score += 2
    elif app_data['scope_count'] > 2:
        risk_score += 1

    # High-risk scopes significantly increase risk
    risk_score += min(len(app_data['high_risk_scopes']), 5)

    # Rapid adoption may indicate a phishing campaign
    first_seen = datetime.fromisoformat(app_data['first_seen'].replace('Z', '+00:00'))
    last_seen = datetime.fromisoformat(app_data['last_seen'].replace('Z', '+00:00'))
    adoption_period = (last_seen - first_seen).total_seconds() / 3600  # hours
    if adoption_period < 24 and app_data['user_count'] > 5:
        risk_score += 2  # Rapid adoption across multiple users

    # Multiple IP addresses may indicate broader usage or command & control
    if app_data['unique_ips'] > 10:
        risk_score += 2
    elif app_data['unique_ips'] > 5:
        risk_score += 1

    return risk_score
```
6. Evidence Collection
   - Document suspicious applications
   - Record affected users and scope of installation
   - Capture OAuth permission request details
   - Create timeline of application installation and usage
### Containment Procedures

Immediate Actions:

1. Application Access Revocation
2. Marketplace Application Restriction
3. Chrome Extension Management
4. Apps Script Management
Secondary Containment:

1. API Access Limitation
2. Application Installation Control
3. User Communication
   - Notify affected users of security incident
   - Provide application revocation instructions
   - Establish reporting procedures for suspicious apps
   - Deploy phishing awareness for application-based attacks
### Eradication Steps

1. Comprehensive Application Removal
2. Permission Cleanup
3. Browser Extension Cleanup
4. Apps Script Remediation
### Recovery Guidance

1. Application Access Restoration
2. Application Control Implementation
3. Extension Governance
4. API Security Enhancement
### Post-Incident Analysis

1. Application Impact Assessment
   - Identify scope of application installation
   - Determine data access and potential exposure
   - Assess user impact and workflow disruption
   - Document affected systems and services
2. Application Control Evaluation
   - Review application governance processes
   - Assess installation control effectiveness
   - Evaluate permission management practices
   - Identify security control gaps
3. Documentation and Reporting
   - Create comprehensive incident timeline
   - Document affected applications and users
   - Prepare executive summary
   - Develop application security roadmap
4. Security Enhancement Implementation
   - Deploy application governance framework
   - Implement comprehensive vetting process
   - Establish application security baselines
   - Create continuous monitoring procedures
## Additional Resources and References

### Incident Response Documentation Templates

1. Incident Response Report Template
```markdown
# Google Workspace Security Incident Report

## Incident Overview
- Incident ID: [ID]
- Type: [Account Compromise, Data Exfiltration, OAuth Token Abuse, etc.]
- Severity: [Critical, High, Medium, Low]
- Date Detected: [Date]
- Date Contained: [Date]
- Date Resolved: [Date]
- Affected Users/Systems: [List]

## Incident Timeline
- [Date/Time] - [Event]
- [Date/Time] - [Event]
- [Date/Time] - [Event]

## Root Cause Analysis
- [Detailed analysis of how the incident occurred]

## Impact Assessment
- [Description of impact on systems, users, and data]

## Response Actions
- [Actions taken to contain, eradicate, and recover]

## Lessons Learned
- [Key takeaways and improvement opportunities]

## Recommendations
- [Specific actions to prevent recurrence]
```
2. Communication Templates
   - User Notification Template
   - Executive Briefing Template
   - Technical Team Communication Template
   - Post-Incident Review Meeting Agenda
### Google Workspace Security Resources

- Official Google Documentation
  - Google Workspace Security Center
  - Google Security Best Practices
- Security Frameworks and Standards
  - NIST Cybersecurity Framework
  - SANS Incident Response Guide
- Regulatory Compliance Resources
  - GDPR Compliance
  - HIPAA Compliance
  - NIST 800-53 Controls
Note: These playbooks should be adapted to your organization's specific Google Workspace configuration, security posture, and compliance requirements.