Safety Mechanics – Lantern

Overview

Safety is paramount in Lantern. As a platform that facilitates real-world meetings between strangers, we must implement comprehensive safety mechanics to protect users from harassment, abuse, and dangerous situations. This document outlines the design and implementation approach for critical safety features.


Core Safety Principles

  1. User Control: Users must have immediate, accessible controls to manage their safety
  2. Privacy-First: Safety features must not compromise user anonymity or privacy
  3. Real-World Integration: Partner with established safety organizations and venues
  4. Proactive Protection: Detect and prevent abuse before it escalates
  5. Emergency Response: Provide immediate assistance when users are in danger
  6. Community Support: Build a network of safe spaces beyond traditional law enforcement

Feature 1: Block Functionality

Purpose

Allow users to completely prevent unwanted individuals from seeing or interacting with them, cutting off avenues for harassment and stalking.

Design Requirements

User Experience:

  • Block button accessible from any interaction point (wave received, active connection, chat)
  • One-tap blocking with optional feedback reason (for pattern detection)
  • Blocked status is permanent until user explicitly unblocks
  • Visual confirmation when block is successful

Backend Behavior:

  • Bidirectional invisibility: Neither user can see the other's lantern, waves, or presence
  • Historical data cleanup: Remove blocked user from all active connections and wave history
  • Pattern detection: Track if multiple users block the same individual (flag for moderation)
  • Cross-device sync: Block list syncs across all user devices in real-time

Privacy Considerations:

  • Blocked user is NOT notified (prevents retaliation)
  • Blocked user cannot determine they've been blocked (same UX as if target user is not present)
  • Block list is encrypted and never shared with third parties

Implementation Plan (TODO)

javascript
// TODO: Implement Block Service (src/lib/blockService.js)
// - blockUser(userId, blockedUserId, reason?)
// - unblockUser(userId, blockedUserId)
// - getBlockedUsers(userId)
// - isUserBlocked(userId, targetUserId)
// - subscribeToBlockList(userId, callback)

// TODO: Firestore Data Model
// Collection: blocks
// Document: {userId}_{blockedUserId}
// Fields:
//   - userId: string (who initiated block)
//   - blockedUserId: string (who is blocked)
//   - reason?: string (optional, for pattern detection)
//   - createdAt: timestamp
//   - platform: string (web, ios, android)

// TODO: Firestore Security Rules
// - Users can only read their own block list
// - Users can only create blocks where userId matches auth.uid
// - Users cannot query who has blocked them

// TODO: UI Components
// - BlockButton component (confirm dialog + reason selection)
// - BlockedUsersList component in ProfileSettings
// - Block confirmation modal with "Are you sure?" messaging

Integration Points:

  • Dashboard.jsx: Add block button to wave notifications and active connections
  • Chat.jsx: Add block button to chat header
  • ProfileSettings.jsx: Add "Blocked Users" management section
  • Wave/Lantern Services: Filter out blocked users from all queries
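
A minimal sketch of that query-side filtering, assuming the blockService API from the TODO block above (db is an assumed app-level Firestore instance, and the waves collection with its toUserId/fromUserId fields is illustrative):

javascript
// src/lib/waveService.js (illustrative excerpt)
import { collection, query, where, getDocs } from 'firebase/firestore'
import { db } from './firebase' // assumed app-level Firestore instance
import { getBlockedUsers } from './blockService'

export async function getVisibleWaves(userId) {
  const blocked = new Set(await getBlockedUsers(userId))

  const snapshot = await getDocs(
    query(collection(db, 'waves'), where('toUserId', '==', userId))
  )

  // Firestore's not-in operator caps out at 10 values, so filter
  // against the cached block list client-side instead.
  return snapshot.docs
    .map((doc) => ({ id: doc.id, ...doc.data() }))
    .filter((wave) => !blocked.has(wave.fromUserId))
}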

Pattern Detection (Future Enhancement):

  • Track users who receive multiple blocks (threshold: 5+ blocks in 30 days)
  • Flag accounts for manual moderation review
  • Automated shadow banning for severe cases (user sees normal app, but invisible to others)
  • Aggregate anonymized block reasons to identify harassment patterns
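
As a sketch, the 5-blocks-in-30-days flag could be wired up as a Firestore trigger. Collection and field names follow the data model above; moderation_flags is an assumed collection, and admin.initializeApp() is assumed to run at module load:

javascript
// functions/blockPatternDetection.js (illustrative)
const functions = require('firebase-functions')
const admin = require('firebase-admin')

const BLOCK_FLAG_THRESHOLD = 5 // 5+ blocks in 30 days → flag for review

exports.onBlockCreated = functions.firestore
  .document('blocks/{blockId}')
  .onCreate(async (snap) => {
    const { blockedUserId } = snap.data()
    const windowStart = new Date(Date.now() - 30 * 24 * 60 * 60 * 1000)

    // Requires a composite index on (blockedUserId, createdAt)
    const recentBlocks = await admin.firestore()
      .collection('blocks')
      .where('blockedUserId', '==', blockedUserId)
      .where('createdAt', '>=', windowStart)
      .get()

    if (recentBlocks.size >= BLOCK_FLAG_THRESHOLD) {
      await admin.firestore().collection('moderation_flags').add({
        userId: blockedUserId,
        reason: 'block_threshold_exceeded',
        blockCount: recentBlocks.size,
        flaggedAt: admin.firestore.FieldValue.serverTimestamp(),
      })
    }
  })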

Feature 1.5: Platform Bans & Account Suspension

Purpose

Distinguish between user-initiated blocks (individual safety control) and platform-initiated bans (enforcement against policy violations). Prevent repeat offenders from circumventing bans by creating new accounts.

Block vs. Ban: Key Differences

User Block (Individual Control):

  • Who initiates: Individual user
  • Scope: Affects only the blocking user and blocked user (bidirectional invisibility)
  • Visibility: Other users can still see and interact with the blocked user
  • Duration: Permanent until user chooses to unblock
  • Purpose: Personal safety and harassment prevention
  • Privacy: Blocked user is NOT notified (prevents retaliation)
  • Example: "I don't want to interact with this person" → User blocks them

Platform Ban (Moderation Enforcement):

  • Who initiates: Lantern moderation team based on policy violations
  • Scope: Affects entire platform (user cannot interact with anyone)
  • Visibility: User account disabled, cannot light lanterns, send waves, or access app features
  • Duration: Temporary (suspension) or permanent (ban)
  • Purpose: Enforce community guidelines and protect all users
  • Notification: User is notified of ban with reason (appeals process available)
  • Example: User receives 10+ blocks in 30 days → Moderation review → Permanent ban

Account Ban Severity Levels

Level 1: Warning (First Offense, Minor Violations):

  • User receives in-app warning with explanation of policy violation
  • No access restrictions
  • Warning logged for future reference
  • Examples: Spamming waves, inappropriate profile interests
  • Action: Educational message + "Please review our community guidelines"

Level 2: Temporary Suspension (Second Offense or Moderate Violations):

  • Account suspended for 7-30 days (escalating based on severity)
  • User cannot light lanterns, send waves, or access chats during suspension
  • User notified via email with suspension reason and duration
  • Examples: Repeated spam, aggressive behavior, multiple user reports
  • Action: "Your account has been suspended for 7 days due to [reason]"

Level 3: Shadow Ban (Pattern of Problematic Behavior):

  • User can use app normally BUT is invisible to other users
  • User sees normal UX (can light lanterns, send waves) but no one receives them
  • Used when outright ban would cause retaliation or multi-account creation
  • Examples: Suspected stalking, harassment patterns, excessive blocking (5+ blocks in 30 days)
  • Action: Invisible to users, visible to moderation team for monitoring

Level 4: Permanent Ban (Severe or Repeat Violations):

  • Account permanently disabled, cannot access app
  • All user data marked for deletion (30-day retention for legal/appeals)
  • Email notification with ban reason and appeals process
  • Examples: Threats of violence, sexual harassment, doxxing, severe ToS violations
  • Action: "Your account has been permanently banned for [reason]. You may appeal within 30 days."

Ban Escalation Workflow

Automated Triggers (Require Manual Review):

  1. User receives 5+ blocks in 30 days → Flag for review
  2. User receives 10+ blocks in 30 days → Automatic temporary suspension + review
  3. User sends 100+ waves in 24 hours → Rate limit + flag for spam review
  4. User reported 3+ times for same violation type → Flag for review
  5. User creates new account within 24 hours of ban → Flag for ban evasion review

Manual Review Process:

  1. Moderation team reviews flagged account
  2. Examine block reasons, user reports, behavioral patterns
  3. Consider account history (warnings, prior suspensions)
  4. Determine appropriate action (warning, suspension, shadow ban, permanent ban)
  5. Document decision with evidence (screenshots, logs, witness reports)
  6. Notify user with clear explanation and appeals path

Appeals Process:

  • User can appeal ban within 30 days via email to appeals@ourlantern.app
  • Provide case number, account details, explanation
  • Different moderator reviews appeal (not original decision maker)
  • Decision communicated within 7 business days
  • Upheld ban: Account deleted after 30 days
  • Overturned ban: Account reinstated with apology

Re-Registration Prevention

Challenge: Anonymous architecture + no photo verification makes it easy to create new accounts after a ban.

Multi-Layered Detection Approach:

Layer 1: Email & Phone Verification (Basic)

  • Require email verification for account creation
  • Optional phone verification (SMS code) for enhanced security
  • Ban extends to verified email/phone (cannot reuse)
  • Limitation: Easy to use disposable emails or new phone numbers

Layer 2: Device Fingerprinting (Moderate)

  • Collect device metadata without PII:
    • Browser fingerprint (user agent, screen resolution, timezone, language, fonts)
    • Device model and OS version (mobile apps)
    • Installation ID (generated on first app install)
  • Ban extends to device fingerprint
  • If banned device creates new account → Automatic flag for review
  • Limitation: User can change browsers, use VPN, factory reset device

Layer 3: Behavioral Fingerprinting (Advanced)

  • Track behavioral patterns (not content):
    • Typing speed and rhythm (keystroke dynamics)
    • Wave sending patterns (time of day, frequency, preferred venues)
    • Navigation patterns (how user moves through app)
    • Interaction timing (time between actions)
  • ML model detects similarity to banned user's behavioral signature
  • High match (>80% confidence) → Flag for manual review
  • Limitation: Requires significant data, privacy implications
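
For illustration only, a naive similarity check over the behavioralSignature fields shown in the ban database model below. A production system would use a trained model rather than raw cosine similarity; the 0.8 threshold mirrors the >80% figure above:

javascript
// src/lib/behavioralMatch.js (illustrative; not a substitute for an ML model)

// Build a numeric feature vector from a behavioral signature
function toVector(signature) {
  return [
    signature.typingSpeed / 200, // normalize WPM into roughly [0, 1]
    ...hourHistogram(signature.wavingPattern), // 24-bucket activity histogram
  ]
}

function hourHistogram(hours) {
  const buckets = new Array(24).fill(0)
  for (const h of hours) buckets[h] += 1
  const total = hours.length || 1
  return buckets.map((count) => count / total)
}

export function behavioralMatch(candidate, banned) {
  const a = toVector(candidate)
  const b = toVector(banned)
  const dot = a.reduce((sum, value, i) => sum + value * b[i], 0)
  const magnitude = (v) => Math.sqrt(v.reduce((s, x) => s + x * x, 0))
  const similarity = dot / (magnitude(a) * magnitude(b) || 1)

  return similarity > 0.8 // high match → flag for manual review, never auto-ban
}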

Layer 4: Social Graph Analysis (Advanced)

  • Track connection patterns:
    • Which users wave to which other users (anonymized graph)
    • Venue check-in patterns (same venues at same times)
    • Mutual connections between accounts
  • If new account immediately connects to same users as banned account → Flag
  • Example: Banned User A always waved to Users B, C, D. New account waves to B, C, D within 24 hours → Suspicious
  • Limitation: False positives if friend groups use app together

Layer 5: Payment Information (If Applicable)

  • If Lantern monetizes (premium features, merchant payments):
    • Ban extends to payment methods (credit card, PayPal, Venmo)
    • Billing address, payment processor fingerprint
  • New account using same payment method → Automatic ban
  • Limitation: Only applies if user makes payments

Implementation Strategy (Privacy-Preserving)

Minimal PII Collection:

  • Do NOT collect: Full name, birth date (beyond 18+ verification), photo, SSN
  • DO collect (for ban enforcement only):
    • Hashed email (bcrypt, cannot reverse)
    • Hashed phone (if provided)
    • Device fingerprint (browser/device metadata)
    • Installation ID (randomly generated UUID)
    • Behavioral patterns (anonymized, aggregated)

Ban Database (Separate from User Data):

javascript
// Collection: banned_accounts
{
  id: "ban123",
  reason: "harassment", // Policy violation type
  severity: "permanent", // warning, temporary, shadow, permanent
  bannedAt: Timestamp,
  expiresAt: Timestamp or null, // null for permanent
  evidence: "encrypted_reference_to_logs", // Not plaintext
  appealStatus: "pending" | "upheld" | "overturned",
  
  // Identifiers (hashed for privacy)
  emailHash: "bcrypt_hash_of_email",
  phoneHash: "bcrypt_hash_of_phone",
  deviceFingerprint: "browser_fingerprint_hash",
  installationId: "uuid",
  
  // Behavioral signatures (anonymized)
  behavioralSignature: {
    typingSpeed: 120, // WPM average
    wavingPattern: [9, 14, 22], // Hours of day (0-23)
    preferredVenues: ["venue_type_coffee", "venue_type_bar"] // Categories, not specific venues
  }
}

Account Creation Check (On Signup):

  1. User enters email → Hash email → Check banned_accounts collection
  2. If email hash matches → "This email cannot be used" (don't reveal ban)
  3. If phone provided → Hash phone → Check for match
  4. Generate device fingerprint → Check for match
  5. If device matches banned account → Flag for manual review (don't block immediately to avoid false positives)
  6. After account creation, monitor behavioral patterns for first 7 days
  7. If behavior matches banned user signature → Shadow ban + manual review
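
Condensed into code, the signup gate might look like the sketch below. isEmailBanned and generateDeviceFingerprint are sketched later in this document; isPhoneBanned and fingerprintMatchesBan are assumed counterparts:

javascript
// src/lib/signupGate.js (illustrative)
import { isEmailBanned, isPhoneBanned, fingerprintMatchesBan } from './banService' // isPhoneBanned/fingerprintMatchesBan are hypothetical
import { generateDeviceFingerprint } from './deviceFingerprint'

export async function checkSignupAllowed({ email, phone }) {
  if (await isEmailBanned(email)) {
    // Generic message: never reveal that a ban exists
    return { allowed: false, message: 'This email cannot be used' }
  }

  if (phone && (await isPhoneBanned(phone))) {
    return { allowed: false, message: 'This phone number cannot be used' }
  }

  const { fingerprint } = await generateDeviceFingerprint()
  if (await fingerprintMatchesBan(fingerprint)) {
    // A device match alone is weak evidence (shared devices, internet cafes):
    // allow signup but flag the new account for manual review.
    return { allowed: true, flagForReview: true }
  }

  return { allowed: true, flagForReview: false }
}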

False Positive Mitigation:

  • Device fingerprint matches alone should NOT auto-ban (shared devices, internet cafes)
  • Require 2+ signals to flag: (Email hash + device) OR (Device + behavioral pattern)
  • Manual review required before permanent ban
  • Users can appeal if falsely flagged

GDPR/CCPA Compliance:

  • Banned users have right to know what data is retained
  • Ban enforcement data retained for 1 year (legitimate interest: fraud prevention)
  • After 1 year, hashed identifiers deleted (unless active appeals or legal hold)
  • User can request "right to erasure" but ban remains in effect

Transparency:

  • Publish aggregate ban statistics quarterly:
    • Total bans issued (by severity level)
    • Most common ban reasons
    • Ban appeal success rate
    • Average ban duration
  • Do NOT publish individual ban details (privacy)

Community Guidelines Enforcement:

  • Clear, public community guidelines (what behaviors are prohibited)
  • Examples of violations with severity levels
  • Users acknowledge guidelines during signup
  • Guidelines linked in ban notifications

Testing & Monitoring

Effectiveness Metrics:

  • Ban evasion rate: % of banned users who successfully create new accounts
  • Time to re-ban: How long before evading account is detected and re-banned
  • False positive rate: % of legitimate users flagged as ban evaders
  • Appeal overturn rate: % of bans overturned on appeal (should be low, indicates good moderation)

Red Flags (Continuous Monitoring):

  • Sudden spike in new account creations (possible coordinated ban evasion)
  • Multiple accounts from same device fingerprint (shared device or evader)
  • New accounts immediately connecting to same users as banned accounts

Encryption & Security Implementation

Critical Question: How do we ensure email, phone, and device fingerprint data is properly encrypted/hashed and cannot be reversed?

Hashing vs Encryption: What We Use Where

Hashing (One-Way, Cannot Reverse):

  • Use for: Email addresses, phone numbers in ban database
  • Algorithm: bcrypt with salt (industry standard, computationally expensive to crack)
  • Why: Even if database is compromised, attacker cannot recover original emails/phones
  • Trade-off: Cannot retrieve original value (acceptable for ban enforcement)

Encryption (Two-Way, Can Decrypt with Key):

  • Use for: Safe space contact info, SOS recordings, emergency contact details
  • Algorithm: AES-256-GCM (symmetric encryption with authenticated encryption)
  • Why: Legitimate use requires decryption (e.g., calling safe space during SOS)
  • Key Management: Encryption keys stored in Firebase Secret Manager, never in code

Anonymization (No PII Storage):

  • Use for: Device fingerprints, behavioral patterns
  • Method: Hash browser metadata components, don't store raw user agent strings
  • Why: Even hashed, the fingerprint provides sufficient uniqueness for detection without storing PII

Implementation Details

1. Email/Phone Hashing (Ban Database)

javascript
// src/lib/banService.js
// NOTE: the bcrypt npm package is Node-only; run these checks in a
// server or Cloud Function context, not in the browser.
import bcrypt from 'bcrypt'
import { collection, getDocs, query } from 'firebase/firestore'
import { db } from './firebase' // assumed app-level Firestore instance

// CORRECT: One-way hashing, cannot reverse
export async function hashEmailForBan(email) {
  // Normalize email (lowercase, trim whitespace)
  const normalized = email.toLowerCase().trim()
  
  // bcrypt with cost factor 10 (recommended for server-side)
  // Higher cost = more secure but slower (10 is good balance)
  const salt = await bcrypt.genSalt(10)
  const hash = await bcrypt.hash(normalized, salt)
  
  return hash
  // Output: "$2b$10$N9qo8uLOickgx2ZMRZoMye..." (60 characters)
  // Cannot reverse to get original email
}

// To check if email is banned:
export async function isEmailBanned(email) {
  const normalized = email.toLowerCase().trim()
  
  // Get all banned email hashes from Firestore
  const bannedEmailsSnapshot = await getDocs(
    query(collection(db, 'banned_accounts'))
  )
  
  // bcrypt.compare handles salt extraction and comparison
  for (const doc of bannedEmailsSnapshot.docs) {
    const bannedHash = doc.data().emailHash
    const matches = await bcrypt.compare(normalized, bannedHash)
    if (matches) return true
  }
  
  return false
}

Important: bcrypt automatically generates and embeds the salt in the hash. Each call to bcrypt.hash() produces a different output even for the same input, but bcrypt.compare() still works correctly. One consequence: salted hashes cannot be looked up by simple equality, so isEmailBanned() must compare against every stored hash; if the ban list grows large, one option would be to also store a keyed HMAC-SHA-256 of the normalized email as an indexed lookup field.

2. Device Fingerprint Anonymization

javascript
// src/lib/deviceFingerprint.js
// Uses the browser's Web Crypto API; Node's 'crypto' module is not
// available in client-side code.

export async function generateDeviceFingerprint() {
  // Collect browser metadata (NOT PII)
  const components = {
    userAgent: navigator.userAgent,
    language: navigator.language,
    screenResolution: `${screen.width}x${screen.height}`,
    colorDepth: screen.colorDepth,
    timezone: Intl.DateTimeFormat().resolvedOptions().timeZone,
    platform: navigator.platform,
    hardwareConcurrency: navigator.hardwareConcurrency,
    // DO NOT include: IP address, GPS coordinates, cookies
  }
  
  // Hash components together (SHA-256)
  const componentsString = JSON.stringify(components)
  const digest = await crypto.subtle.digest(
    'SHA-256',
    new TextEncoder().encode(componentsString)
  )
  const hash = Array.from(new Uint8Array(digest))
    .map((byte) => byte.toString(16).padStart(2, '0'))
    .join('')
  
  return {
    fingerprint: hash, // Store this (64 hex characters)
    // DO NOT store raw components in database
  }
}

Why SHA-256 here instead of bcrypt? Device fingerprints need fast comparison (checking thousands of fingerprints during login), and bcrypt is deliberately slow. SHA-256 is sufficient here because an attacker who obtains the hash cannot reverse it to recover the original browser metadata (which isn't PII anyway).

3. Safe Space Contact Encryption (Two-Way)

javascript
// src/lib/safeSpaceEncryption.js
import { initializeApp } from 'firebase/app'
import { getFunctions, httpsCallable } from 'firebase/functions'

// Encryption happens server-side in Cloud Function
// Key stored in Firebase Secret Manager (never exposed to client)

export async function encryptSafeSpaceContact(phoneNumber) {
  const functions = getFunctions()
  const encryptContactFn = httpsCallable(functions, 'encryptSafeSpaceContact')
  
  // Cloud Function encrypts with AES-256-GCM
  const result = await encryptContactFn({ phoneNumber })
  
  return result.data.encryptedPhone
  // Output: base64-encoded ciphertext + IV + auth tag
  // Example: "AQIDBAUGBwgJ...encrypted_data...auth_tag"
}

// Decryption only during active SOS
export async function decryptSafeSpaceContact(encryptedPhone, sosId) {
  const functions = getFunctions()
  const decryptContactFn = httpsCallable(functions, 'decryptSafeSpaceContact')
  
  // Cloud Function verifies SOS is active before decrypting
  const result = await decryptContactFn({ 
    encryptedPhone, 
    sosId // Proof of active emergency
  })
  
  return result.data.phoneNumber
}

Server-Side Cloud Function (Firebase Functions):

javascript
// functions/index.js
const functions = require('firebase-functions')
const admin = require('firebase-admin')
const crypto = require('crypto')
const { SecretManagerServiceClient } = require('@google-cloud/secret-manager')

admin.initializeApp()

// Encryption key stored in Secret Manager
async function getEncryptionKey() {
  const client = new SecretManagerServiceClient()
  const [version] = await client.accessSecretVersion({
    name: 'projects/PROJECT_ID/secrets/safe-space-encryption-key/versions/latest'
  })
  return version.payload.data.toString('utf8')
}

exports.encryptSafeSpaceContact = functions.https.onCall(async (data) => {
  const { phoneNumber } = data
  const key = await getEncryptionKey() // 256-bit key from Secret Manager
  
  // AES-256-GCM (authenticated encryption)
  const iv = crypto.randomBytes(16) // Initialization vector
  const cipher = crypto.createCipheriv('aes-256-gcm', Buffer.from(key, 'hex'), iv)
  
  let encrypted = cipher.update(phoneNumber, 'utf8', 'hex')
  encrypted += cipher.final('hex')
  const authTag = cipher.getAuthTag()
  
  // Combine IV + ciphertext + auth tag
  const result = iv.toString('hex') + encrypted + authTag.toString('hex')
  
  return { encryptedPhone: result }
})

exports.decryptSafeSpaceContact = functions.https.onCall(async (data, context) => {
  const { encryptedPhone, sosId } = data
  
  // CRITICAL: Verify SOS is actually active before decrypting
  const sosDoc = await admin.firestore().collection('sos_incidents').doc(sosId).get()
  if (!sosDoc.exists || sosDoc.data().status !== 'active') {
    throw new functions.https.HttpsError('permission-denied', 'SOS not active')
  }
  
  const key = await getEncryptionKey()
  
  // Extract IV, ciphertext, auth tag
  const iv = Buffer.from(encryptedPhone.substring(0, 32), 'hex')
  const authTag = Buffer.from(encryptedPhone.substring(encryptedPhone.length - 32), 'hex')
  const encrypted = encryptedPhone.substring(32, encryptedPhone.length - 32)
  
  // Decrypt
  const decipher = crypto.createDecipheriv('aes-256-gcm', Buffer.from(key, 'hex'), iv)
  decipher.setAuthTag(authTag)
  
  let decrypted = decipher.update(encrypted, 'hex', 'utf8')
  decrypted += decipher.final('utf8')
  
  return { phoneNumber: decrypted }
})

Firestore Security Rules (Prevent Data Leaks)

javascript
// firestore.rules
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    
    // Banned accounts: Only server can write, users cannot read
    match /banned_accounts/{banId} {
      allow read: if false; // No direct reads (prevents attackers from downloading ban list)
      allow write: if false; // Only Cloud Functions can write
    }
    
    // Safe spaces: Public can read location/hours, only server can read encrypted contacts
    match /safe_spaces/{spaceId} {
      allow read: if request.auth != null; // Authenticated users can see location
      allow write: if false; // Only Cloud Functions can write
      
      // Encrypted contact info stored in subcollection with stricter rules
      match /private/{doc} {
        allow read: if false; // Never readable by client
        allow write: if false; // Only server
      }
    }
    
    // Block list: Users can only read/write their own blocks
    match /blocks/{blockId} {
      allow read, write: if request.auth != null && 
        blockId.matches(request.auth.uid + '_.*');
    }
  }
}

Key Security Verification Checklist

Before Production Launch:

  • [ ] Code Review: Security team reviews all encryption code
  • [ ] Penetration Testing: External security firm attempts to reverse hashes
  • [ ] Key Rotation: Encryption keys rotated every 90 days (automated via Secret Manager)
  • [ ] Firestore Rules Audit: Run Firebase emulator with test suite attempting unauthorized reads
  • [ ] Dependency Scanning: Ensure bcrypt, crypto libraries are latest patched versions
  • [ ] Logging Audit: Verify no plaintext emails/phones logged in application logs
  • [ ] Database Dump Test: Export Firestore data, confirm no reversible PII present
  • [ ] Cloud Function Security: Ensure functions verify authentication/authorization before decryption

Monitoring & Alerts:

javascript
// functions/monitoring.js
// Alert if suspicious decryption patterns are detected.
// Assumes each decryption increments a decryptionCount field that is
// reset on a 24-hour schedule (field name is illustrative).
const functions = require('firebase-functions')

const DAILY_DECRYPTION_ALERT_THRESHOLD = 100 // decryptions per 24-hour window

exports.monitorDecryption = functions.firestore
  .document('safe_spaces/{spaceId}/private/{docId}')
  .onUpdate(async (change, context) => {
    // Track decryption frequency within the current window
    const decryptionCount = change.after.data().decryptionCount || 0
    
    if (decryptionCount > DAILY_DECRYPTION_ALERT_THRESHOLD) {
      // Alert security team: possible key compromise
      await sendSecurityAlert('High decryption volume detected')
    }
  })

How to Verify Encryption is Working

Developer Testing:

  1. Hash Verification Test:
javascript
// test/banService.test.js
import { hashEmailForBan, isEmailBanned } from '../lib/banService'

test('hashed email cannot be reversed', async () => {
  const email = 'test@example.com'
  const hash = await hashEmailForBan(email)
  
  // Hash should NOT contain original email
  expect(hash).not.toContain('test')
  expect(hash).not.toContain('example')
  
  // Hash should be bcrypt format ($2b$...)
  expect(hash).toMatch(/^\$2b\$/)
  
  // Same email hashed twice produces different hashes (salt)
  const hash2 = await hashEmailForBan(email)
  expect(hash).not.toBe(hash2)
  
  // But comparison still works
  expect(await isEmailBanned(email)).toBe(true) // if email is in ban DB
})
  2. Encryption Roundtrip Test:
javascript
// test/safeSpaceEncryption.test.js
import { encryptSafeSpaceContact, decryptSafeSpaceContact } from '../lib/safeSpaceEncryption'
import { activateSOS } from '../lib/sosService' // assumed SOS service (Feature 2)
test('encrypted phone can be decrypted only with active SOS', async () => {
  const phone = '+1-555-123-4567'
  const encrypted = await encryptSafeSpaceContact(phone)
  
  // Encrypted should NOT contain original phone
  expect(encrypted).not.toContain('555')
  expect(encrypted).not.toContain('123')
  
  // Decryption without active SOS should fail
  await expect(
    decryptSafeSpaceContact(encrypted, 'invalid-sos-id')
  ).rejects.toThrow('SOS not active')
  
  // Create active SOS
  const sosId = await activateSOS(userId, location)
  
  // Decryption with active SOS should succeed
  const decrypted = await decryptSafeSpaceContact(encrypted, sosId)
  expect(decrypted).toBe(phone)
})
  3. Firestore Rules Test:
javascript
// test/firestore.rules.test.js
import { initializeTestEnvironment, assertFails } from '@firebase/rules-unit-testing'

test('users cannot read banned_accounts collection', async () => {
  const testEnv = await initializeTestEnvironment({...})
  const alice = testEnv.authenticatedContext('alice')
  
  await assertFails(
    alice.firestore().collection('banned_accounts').get()
  )
})

test('users cannot read safe_spaces/private subcollection', async () => {
  const testEnv = await initializeTestEnvironment({...})
  const alice = testEnv.authenticatedContext('alice')
  
  await assertFails(
    alice.firestore()
      .collection('safe_spaces/space123/private')
      .get()
  )
})

Production Monitoring:

  1. Database Export Audit: Monthly export of Firestore, scan for any plaintext PII
  2. Log Analysis: Automated scanning of application logs for regex patterns matching emails/phones
  3. Dependency CVE Scanning: GitHub Dependabot alerts for bcrypt/crypto vulnerabilities
  4. Key Rotation Verification: Automated test that old keys can no longer decrypt new data

Privacy Impact Assessment

Question: "How can we be sure this info is encrypted?"

Answer: Multi-layer verification approach:

  1. Code-Level: Encryption/hashing implemented in dedicated, tested modules
  2. Database-Level: Firestore rules prevent direct client reads of sensitive collections
  3. Infrastructure-Level: Encryption keys stored in Secret Manager, never in code/env files
  4. Testing-Level: Automated tests verify hashes cannot be reversed, encrypted data cannot be decrypted without authorization
  5. Audit-Level: External security firm reviews implementation before production launch
  6. Monitoring-Level: Continuous monitoring for suspicious decryption patterns or PII leaks in logs

Transparency for Users: Document encryption in User Security Guide:

  • "Email addresses used for ban enforcement are hashed with bcrypt and cannot be reversed, even by Lantern staff"
  • "Safe space contact information is encrypted with AES-256-GCM and can only be decrypted during active SOS emergencies"
  • "Device fingerprints are anonymized hashes of browser metadata, not personally identifiable information"

Feature 2: SOS Emergency System

Purpose

Provide immediate emergency assistance when users feel unsafe, including audio/video recording, location sharing with authorities, and rapid response coordination.

Design Requirements

User Experience:

  • Prominent, always-accessible SOS button (red, distinctive, never hidden)
  • Two-stage activation to prevent accidental triggers:
    1. Tap SOS → Confirmation screen with "Activate Emergency" button
    2. Tap "Activate Emergency" → Immediate activation
  • Alternative: Long-press SOS button (3 seconds) to bypass confirmation
  • Visual/haptic feedback when emergency is activated
  • Persistent notification showing emergency is active (cannot be dismissed until deactivated)
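
A hypothetical long-press handler for the SOS button, where a 3-second hold bypasses the confirmation screen per the flow above (handler names follow React DOM conventions; the wiring is illustrative):

javascript
const LONG_PRESS_MS = 3000 // 3-second hold bypasses confirmation

export function createSOSLongPressHandlers(onActivate) {
  let timer = null

  const start = () => {
    timer = setTimeout(() => {
      if (navigator.vibrate) navigator.vibrate(200) // haptic feedback where supported
      onActivate() // bypass the confirmation screen
    }, LONG_PRESS_MS)
  }
  const cancel = () => clearTimeout(timer)

  return {
    onMouseDown: start,
    onMouseUp: cancel,
    onMouseLeave: cancel,
    onTouchStart: start,
    onTouchEnd: cancel,
  }
}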

Emergency Actions (Activated Simultaneously):

  1. Audio/Video Recording:

    • Begin recording immediately using device camera/microphone
    • Save recording to encrypted local storage (device-only, not uploaded automatically)
    • Continue recording until user manually stops or 15-minute limit reached
    • Auto-upload to secure Lantern servers when device reconnects to WiFi (if user consents)
    • User can review and delete recording post-incident
  2. Location Sharing:

    • Send precise GPS coordinates to emergency contacts
    • Share real-time location updates every 30 seconds
    • Show current venue (if user is at a Lantern partner venue)
    • Location data encrypted and time-limited (deleted after 7 days unless incident reported)
  3. Emergency Contacts Notification:

    • SMS/push notification to user-designated emergency contacts
    • Message: "[User's Real Name] activated Lantern SOS at [Location]. Live location: [Link]"
    • Emergency contacts can view live location tracking via secure web link (no app required)
  4. Safe Spaces Alert (if nearby):

    • Identify nearest Lantern partner safe space within 5 miles
    • Send alert to safe space staff with user's approximate location and emergency status
    • Display directions to safe space on user's screen
    • Safe spaces include: LGBTQ centers, domestic abuse shelters, partner venues with trained staff
  5. Law Enforcement Integration (Opt-In):

    • User can enable auto-911 dial when SOS is activated (disabled by default)
    • Send emergency alert to local police with location and Lantern incident ID
    • Critical: Respect user preference - some communities distrust police (LGBTQ+, POC, immigrants)
    • Always prioritize safe spaces and community resources over law enforcement

Deactivation:

  • User can manually deactivate SOS from persistent notification
  • Require PIN/passphrase to deactivate (prevent abuser from stopping it)
  • Send "SOS Deactivated" message to emergency contacts when stopped
  • Prompt user to submit incident report (optional, helps improve safety features)

Implementation Plan (TODO)

javascript
// TODO: Implement SOS Service (src/lib/sosService.js)
// - activateSOS(userId, location)
// - deactivateSOS(userId, pin)
// - subscribeToSOSStatus(userId, callback)
// - submitIncidentReport(userId, sosId, details)
// - getSOSRecording(sosId)
// - uploadSOSRecording(sosId, recordingBlob)

// TODO: Emergency Contacts Management (src/lib/emergencyContacts.js)
// - addEmergencyContact(userId, contact: {name, phone, email, relationship})
// - removeEmergencyContact(userId, contactId)
// - getEmergencyContacts(userId)
// - notifyEmergencyContacts(userId, sosId, location)

// TODO: Recording Service (src/lib/recordingService.js)
// - startRecording(sosId) → mediaRecorder instance
// - stopRecording(mediaRecorder) → recordingBlob
// - saveRecordingLocally(sosId, recordingBlob)
// - uploadRecordingSecure(sosId, recordingBlob, encryptionKey)
// - getRecordingMetadata(sosId) → {duration, size, uploadStatus}

// TODO: Firestore Data Model
// Collection: sos_incidents
// Document: {sosId}
// Fields:
//   - userId: string
//   - status: 'active' | 'resolved' | 'cancelled'
//   - activatedAt: timestamp
//   - deactivatedAt?: timestamp
//   - location: geopoint
//   - venue?: string (if at partner venue)
//   - recordingUrl?: string (encrypted URL)
//   - incidentReport?: string
//   - notifiedContacts: array of contact IDs
//   - notifiedSafeSpaces: array of safe space IDs

// TODO: UI Components
// - SOSButton component (red, prominent, always visible)
// - SOSActivationModal (confirmation screen)
// - SOSActiveNotification (persistent, cannot dismiss)
// - SOSRecordingIndicator (shows recording is active)
// - EmergencyContactsManager (ProfileSettings integration)
// - IncidentReportForm (post-deactivation)

// TODO: Backend Cloud Functions
// - onSOSActivated(): Trigger emergency contact notifications, alert safe spaces
// - onSOSDeactivated(): Send resolution notifications, prompt incident report
// - monitorSOSStatus(): Auto-escalate if SOS active >30 minutes without deactivation
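
A minimal sketch of startRecording from the TODO above, using the standard MediaRecorder API (saveRecordingLocally is the not-yet-implemented encrypted-storage helper from the same TODO; error handling is elided):

javascript
const MAX_RECORDING_MS = 15 * 60 * 1000 // design requirement: 15-minute limit

export async function startRecording(sosId) {
  const stream = await navigator.mediaDevices.getUserMedia({
    audio: true,
    video: true,
  })

  const recorder = new MediaRecorder(stream)
  const chunks = []
  recorder.ondataavailable = (event) => chunks.push(event.data)

  recorder.onstop = async () => {
    const recordingBlob = new Blob(chunks, { type: 'video/webm' })
    await saveRecordingLocally(sosId, recordingBlob) // encrypted local storage
    stream.getTracks().forEach((track) => track.stop())
  }

  // Enforce the 15-minute recording limit
  setTimeout(() => {
    if (recorder.state === 'recording') recorder.stop()
  }, MAX_RECORDING_MS)

  recorder.start()
  return recorder
}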

Integration Points:

  • Dashboard.jsx: Add SOS button to main navigation (always visible, high priority)
  • DashboardNavigation component: Add SOS as first navigation item
  • ProfileSettings.jsx: Add "Emergency Contacts" section
  • Native APIs: Request microphone/camera permissions at first SOS activation

Privacy Safeguards:

  • SOS recordings stored encrypted, only decryptable with user's passphrase
  • User can delete recordings at any time (exception: if subpoenaed in criminal case)
  • Emergency contacts CANNOT access recordings, only location data
  • Location sharing stops immediately when SOS is deactivated
  • All SOS data auto-deleted after 30 days unless user marks as evidence

Legal Considerations:

  • Terms of Service must disclose SOS recording functionality
  • Recording consent laws vary by state and jurisdiction:
    • One-party consent states: Legal in most US states (as of 2025, verify current laws in your jurisdiction)
    • Two-party consent states: Require explicit opt-in with legal disclaimer
    • Note: Recording laws change frequently - consult legal counsel before implementation
  • International markets: Comply with local recording laws (GDPR, CCPA, etc.)
  • Recommendation: Display clear consent language during SOS setup explaining recording will begin on activation

Feature 3: Safe Spaces Partnership Program

Purpose

Partner with community organizations to provide physical safe havens and support resources beyond traditional law enforcement. Prioritize marginalized communities who may not feel safe contacting police.

Partner Types โ€‹

1. LGBTQ+ Centers & Community Spaces:

  • Drop-in centers offering safe space, resources, and peer support
  • Trans-specific safe houses and shelters
  • LGBTQ+ bars/venues with trained staff and safety protocols
  • Youth-focused LGBTQ+ organizations (for 18-21 age group)

2. Domestic Violence & Abuse Shelters:

  • Women's shelters with 24/7 crisis intervention
  • Family violence centers with emergency housing
  • Rape crisis centers with counseling and advocacy
  • Human trafficking support organizations

3. Mental Health Crisis Centers:

  • Suicide prevention hotlines and walk-in centers
  • Mental health first aid locations
  • Peer support networks for substance abuse recovery

4. Community Safety Organizations:

  • Neighborhood watch groups with trained volunteers
  • Community violence intervention programs
  • Street medic collectives (for protests, high-risk areas)
  • Harm reduction organizations (needle exchange, overdose prevention)

5. Religious & Cultural Community Centers:

  • Churches, mosques, temples offering sanctuary
  • Cultural centers serving immigrant communities
  • Mutual aid networks and community fridges

Partnership Requirements

Venue Qualifications:

  • Physical location with regular operating hours
  • Trained staff on trauma-informed care and de-escalation
  • Established track record serving vulnerable populations
  • Willingness to respond to Lantern SOS alerts (opt-in)
  • Privacy policy compatible with Lantern's zero-knowledge architecture

Lantern Integration:

  • Safe space location added to Firestore venues collection
  • Custom venue type: safeSpace (distinct from bars, cafes, etc.)
  • Safe space is NOT shown in public venue lists (only during SOS or user search)
  • SOS alerts sent via SMS/email to designated safe space contact
  • Alert includes: approximate location of user in distress, distance from safe space, SOS incident ID

Funding Mechanism (Company Commitment):

  • Lantern commits to donate a percentage of revenue to partner safe spaces
  • See FUND_ALLOCATION.md for complete funding framework
  • Constitutional minimum: 2% of gross revenue OR $1,000/month (whichever is higher)
  • Tiered funding model:
    • Tier 1 (24/7 crisis centers): $500-$1,000/month per partner
    • Tier 2 (Drop-in centers with limited hours): $250-$500/month per partner
    • Tier 3 (On-call volunteer networks): $100-$250/month per partner
    • Tier 4 (Educational partners): $50-$100/month per partner
  • Transparent quarterly reports showing total donations and recipient organizations
  • User-directed donations: Users can optionally donate directly to specific safe spaces
  • Matching program: Lantern may match user donations (details TBD based on financial position)
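
In code, the constitutional minimum reduces to a one-line rule; a sketch:

javascript
// Monthly safe-space funding: 2% of gross revenue or $1,000, whichever is higher
export function monthlySafeSpaceFunding(grossRevenueUSD) {
  const CONSTITUTIONAL_MINIMUM_USD = 1000
  const REVENUE_PERCENTAGE = 0.02
  return Math.max(grossRevenueUSD * REVENUE_PERCENTAGE, CONSTITUTIONAL_MINIMUM_USD)
}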

Transparency:

  • Publicly list all partner safe spaces on Lantern website (with their consent)
  • Publish annual transparency report showing:
    • Total SOS activations per city
    • Number of users who accessed safe spaces
    • Total funding provided to partner organizations
    • Anonymized success stories (with user consent)

Implementation Plan (TODO)

javascript
// TODO: Safe Spaces Service (src/lib/safeSpacesService.js)
// - getNearestSafeSpaces(location, radius, type?)
// - alertSafeSpace(safeSpaceId, sosId, userLocation)
// - getSafeSpaceDetails(safeSpaceId)
// - subscribeToSafeSpaceUpdates(callback)
// - reportSafeSpaceExperience(userId, safeSpaceId, rating, feedback)

// TODO: Firestore Data Model Extension
// Collection: safe_spaces
// Document: {safeSpaceId}
// Fields:
//   - name: string
//   - type: 'lgbtq_center' | 'domestic_violence_shelter' | 'mental_health' | 'community' | 'religious'
//   - address: string
//   - location: geopoint
//   - contactPhone: string (encrypted, only revealed during SOS)
//   - contactEmail: string
//   - operatingHours: object
//   - services: array ['crisis_intervention', 'emergency_housing', 'counseling', etc.]
//   - acceptsSOSAlerts: boolean
//   - publiclyListed: boolean (some shelters require confidential addresses)
//   - fundingTier: 1 | 2 | 3 | 4
//   - lastAlertedAt?: timestamp
//   - totalAlertsReceived: number
//   - avgResponseTime: number (minutes)

// TODO: Funding Tracking
// Collection: safe_space_funding
// Document: {year}_{month}
// Fields:
//   - month: string (YYYY-MM)
//   - totalRevenue: number (Lantern's monthly revenue)
//   - fundingPercentage: number (default: 5%)
//   - totalDonated: number
//   - donations: array [
//       {safeSpaceId, amount, tier, timestamp}
//     ]
//   - userDonations: number (user-directed donations)
//   - lanternMatching: number (matching contributions)

Integration Points:

  • SOS activation flow: Automatically identify and alert nearest safe space
  • Venue search: Add "Safe Spaces" filter (show distance, operating hours, services)
  • ProfileSettings.jsx: Add "Preferred Safe Spaces" (user can pre-select for faster SOS response)
  • Dashboard: Show safe space map layer (opt-in, disabled by default to respect confidential addresses)

Partnership Onboarding Process:

  1. Organization applies via Lantern website form
  2. Lantern team verifies organization legitimacy (501(c)(3) status, references, background checks)
  3. Sign partnership agreement (liability, privacy, alert response SLAs)
  4. Onboarding training: How to respond to Lantern SOS alerts, user privacy expectations
  5. Add safe space to Firestore with encrypted contact details
  6. Monthly check-ins to review SOS activity and funding

Feature 4: Additional Safety Considerations

In-App Safety Education

Safety Tips Panel:

  • Accessible from ProfileSettings → Safety & Privacy → Safety Tips
  • Topics:
    • Meeting strangers safely (public places, tell a friend, trust your instincts)
    • Recognizing red flags (pushy behavior, asking for money, oversharing personal info)
    • Using Lantern's safety features (block, report, SOS)
    • Alcohol safety (watch your drink, pace yourself, have an exit plan)
    • Digital safety (don't share passwords, beware of phishing, protect your passphrase)

Contextual Safety Prompts:

  • Before first lantern light: "Safety Reminder: Only meet in public, tell a friend where you're going"
  • Before accepting first wave: "You can block or report this user at any time"
  • When scheduling late-night light (after 10pm): "Stay safe! Consider bringing a friend or sharing your plans"

Resource Links:

  • National Domestic Violence Hotline: 1-800-799-7233
  • Trans Lifeline: 1-877-565-8860
  • RAINN Sexual Assault Hotline: 1-800-656-4673
  • Suicide Prevention Lifeline: 988
  • Crisis Text Line: Text HOME to 741741

Venue Partnership Safety Protocols

Venue Safety Standards:

  • Partner venues agree to safety training for staff:
    • Recognizing signs of distress or discomfort
    • De-escalation techniques for conflicts
    • How to respond to Lantern SOS alerts from patrons
    • Trauma-informed customer service

Venue Safety Badges (displayed in app):

  • "Lantern Verified Safe Space": Venue has completed safety training
  • "24/7 Security": Venue has on-site security staff
  • "LGBTQ+ Friendly": Venue has LGBTQ+ inclusion policies
  • "Accessibility": Venue is wheelchair accessible with gender-neutral bathrooms

Incident Reporting:

  • Users can report unsafe experiences at venues
  • Venue safety ratings aggregated from user reports
  • Venues with repeated safety violations removed from platform
  • Anonymized safety reports shared with venue management

Proactive Abuse Detection

Behavioral Pattern Monitoring (Privacy-Preserving):

  • Track anonymized metrics (not individual users):
    • Average block rate per user demographic
    • Time-of-day patterns for SOS activations
    • Geographic clustering of safety incidents
  • Flag accounts with suspicious patterns:
    • Receives 5+ blocks in 30 days → manual review
    • Sends 50+ waves per day → rate limit + review
    • Repeatedly reported for harassment → shadow ban

Content Moderation:

  • Encrypted chat is NOT scanned (zero-knowledge architecture)
  • User can report specific messages → manual human review
  • Reported content stored as evidence (encrypted, only reviewable with user consent)
  • Repeat offenders permanently banned

Machine Learning Abuse Detection (Future):

  • Train ML model on anonymized, consented incident reports
  • Detect early warning signs of predatory behavior
  • Predictive blocking: Suggest users pre-emptively block high-risk accounts
  • Critical: Ensure ML does not perpetuate bias (e.g., over-flagging BIPOC users)

Accessibility Considerations

Safety Features for Disabled Users:

  • SOS button compatible with screen readers (VoiceOver, TalkBack)
  • Voice-activated SOS: "Hey Lantern, activate SOS" (for users with limited mobility)
  • Visual SOS mode: Silent flashing screen for deaf/hard-of-hearing users
  • Integration with medical alert systems (future)

Language & Literacy:

  • Safety tips available in multiple languages (Spanish, Mandarin, Arabic, etc.)
  • Plain language explanations (avoid jargon, use simple sentences)
  • Video tutorials for users with low literacy

Data & Privacy Considerations

User Data Collection for Safety:

  • SOS incidents: Location, timestamps, recording metadata (NOT recording content unless uploaded)
  • Block actions: Who blocked whom, when, optional reason
  • Incident reports: User-submitted narratives, screenshots, evidence
  • Safe space interactions: Which safe spaces were alerted, response times

Data Retention:

  • SOS data: 30 days (auto-delete unless user marks as evidence or involved in legal case)
  • Block list: Permanent until user unblocks
  • Incident reports: 1 year (for pattern detection and moderation)
  • Safe space alerts: 90 days (for partnership evaluation)
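
A sketch of how the 30-day SOS retention could be enforced with a scheduled Cloud Function (markedAsEvidence is an assumed flag on sos_incidents; deletions beyond 500 per run would need batch chunking):

javascript
// functions/retention.js (illustrative)
const functions = require('firebase-functions')
const admin = require('firebase-admin')

exports.purgeExpiredSOSData = functions.pubsub
  .schedule('every 24 hours')
  .onRun(async () => {
    const cutoff = new Date(Date.now() - 30 * 24 * 60 * 60 * 1000)

    const expired = await admin.firestore()
      .collection('sos_incidents')
      .where('activatedAt', '<', cutoff)
      .get()

    const batch = admin.firestore().batch()
    expired.docs
      .filter((doc) => {
        const data = doc.data()
        // Keep active incidents and anything the user marked as evidence
        return data.status !== 'active' && !data.markedAsEvidence
      })
      .forEach((doc) => batch.delete(doc.ref))

    await batch.commit() // note: a single batch caps at 500 operations
  })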

Data Access:

  • User: Full access to their own safety data (download, delete)
  • Lantern staff: Encrypted data, only decryptable for active moderation cases
  • Law enforcement: Only with valid subpoena/warrant, and user is notified (unless gag order)
  • Safe spaces: Only approximate user location during active SOS (no PII)

Ethical Commitments:

  • Never sell or share safety data with third parties
  • Never use safety data for advertising or user profiling
  • Transparent annual reports on SOS usage, safe space partnerships, moderation actions
  • Independent audit of safety feature effectiveness (annual)

Implementation Roadmap

Phase 1: Block Functionality (MVP)

Timeline: 4-6 weeks
Priority: HIGH (foundational safety feature)

  • [ ] Implement block service and Firestore data model
  • [ ] Add block buttons to Dashboard, Chat, and wave notifications
  • [ ] Build BlockedUsersList component in ProfileSettings
  • [ ] Firestore security rules for block privacy
  • [ ] Cross-device block list sync
  • [ ] Test: Block user → verify bidirectional invisibility

Phase 2: In-App Safety Education

Timeline: 2-3 weeks
Priority: HIGH (easy win, low development effort)

  • [ ] Create SafetyTipsPanel component
  • [ ] Add contextual safety prompts to lantern lighting and wave acceptance
  • [ ] Link to crisis hotlines and resources
  • [ ] Multilingual safety tips (Spanish, Mandarin for San Diego pilot)
  • [ ] Test: Verify tips display at appropriate moments

Phase 3: Safe Spaces Partnership Program

Timeline: 8-12 weeks (includes outreach and onboarding)
Priority: MEDIUM (requires external partnerships)

  • [ ] Design partnership application and vetting process
  • [ ] Create legal agreements and liability waivers
  • [ ] Build safe spaces Firestore data model
  • [ ] Identify and onboard 5-10 safe spaces in San Diego (pilot city)
  • [ ] Implement safe space search and map view
  • [ ] Set up funding tracking and donation mechanism
  • [ ] Test: User searches for safe space → finds nearest option with directions

Phase 4: SOS Emergency System

Timeline: 12-16 weeks
Priority: MEDIUM-HIGH (complex, high impact)

  • [ ] Implement recording service (audio/video)
  • [ ] Build emergency contacts management
  • [ ] Create SOS activation and deactivation flows
  • [ ] Integrate safe space alerting with Phase 3 partnerships
  • [ ] Secure recording storage with encryption
  • [ ] Test: Activate SOS → verify recording, location sharing, contact notifications
  • [ ] Legal review: Recording consent, law enforcement integration

Phase 5: Advanced Safety Features

Timeline: 16-20 weeks (ongoing)
Priority: LOW (future enhancements)

  • [ ] Implement pattern detection for abuse
  • [ ] Build ML-powered abuse prediction models
  • [ ] Add voice-activated SOS for accessibility
  • [ ] Integrate with medical alert systems
  • [ ] Conduct external security and safety audit
  • [ ] Launch bug bounty program for safety vulnerabilities

Success Metrics

User Safety:

  • Reduction in harassment reports after block feature launch (target: -30%)
  • SOS activation rate (baseline: unknown, monitor for trends)
  • Safe space utilization rate (target: 10% of SOS activations result in safe space visit)
  • User retention among marginalized groups (LGBTQ+, women) after safety features launch

Community Impact:

  • Number of partner safe spaces (target: 50+ nationwide within 1 year)
  • Total funding provided to safe spaces (target: $100k+ in year 1)
  • User satisfaction with safety features (target: 4.5/5 stars)
  • Media coverage highlighting Lantern's safety-first approach

Operational Metrics:

  • Block action volume (monitor for abuse of block feature)
  • Average SOS resolution time (target: <5 minutes from activation to deactivation)
  • False positive SOS rate (accidental activations, target: <5%)
  • Safe space response time (target: <10 minutes to acknowledge SOS alert)

Open Questions & Future Research

  1. SOS False Positives: How to reduce accidental SOS activations without making it harder to use in real emergencies? (Potential: Machine learning to detect distress signals from motion/audio patterns)

  2. Safe Space Coverage: What to do in rural areas with no nearby safe spaces? (Potential: Virtual safe space via video call with trained counselor)

  3. Law Enforcement Relationships: How to balance user safety with distrust of police in marginalized communities? (Potential: User-controlled opt-in for police alerts, prioritize community resources first)

  4. International Expansion: How to adapt safety features for different legal frameworks and cultural contexts? (Research needed for each new market)

  5. Economic Sustainability: Can funding model support safe spaces long-term as Lantern scales? (Potential: Increase percentage of revenue donated as company grows, seek grant funding)

  6. Bystander Intervention: How to empower other Lantern users to help during emergencies without violating privacy? (Potential: "Safety Ally" feature where users opt-in to receive nearby SOS alerts)

  7. Trauma-Informed Design: Are we designing safety features in ways that don't re-traumatize survivors? (Potential: Partner with trauma specialists to review UX/UI)



Document Status: Draft (awaiting stakeholder review)
Last Updated: 2026-01-09
Owner: Lantern Safety Team
Reviewers Needed: Product, Engineering, Legal, Community Partners
