NWS-360

Service Owner Assessments

Draft deliverable specs for 1:1 review with each service owner — what we think, what we need them to confirm.

Date: April 4, 2026
Owner: Patrick Bieser
Purpose: Bring to each 1:1 — capture what they can deliver, what help they need, and timeline
How to use this document: Each assessment below is a draft based on what we know from the project plan. Bring the relevant section to your 1:1 with each service owner. The green sections are pre-filled with our best guess. The orange dashed sections are what they need to tell you. The goal of each meeting is to answer: What can you deliver, can you build it (or do you need help), and how long will it take?

Assessments

SEO Health Check
Category: SEO • Cadence: Monthly • Also covers: SEO Performance Analysis
Alex
Tier | Label | Draft Scope | Delivery
S — Quick Scan | Homepage SEO flags | Index permission, meta description present, H1 structure, descriptive link text, robots.txt check, sitemap presence | Agent → automated report
M — Site Scan | Full site SEO health report | Full crawl: all pages checked for meta, headings, link text, duplicate content, thin pages, canonical tags, structured data | Agent + Alex review?
L — Audit | Technical + on-page SEO audit | Full crawl + keyword gap analysis + competitor comparison + prioritized remediation plan | Alex-led with agent assist
XL — Strategy | Audit + keyword + content strategy | Everything in L + content calendar + keyword targeting roadmap + quarterly check-ins | Alex + Ed K
Metric: Technical SEO health score (0–100) • Goal: >80, 0 critical issues
Baseline captures: Health score, pages indexed vs. submitted, critical SEO errors (crawl blocks, missing meta, duplicate content)
BDM talking point: “Do you know if Google can actually find and index all the important pages on your site?”
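For reference in the scope discussion, a minimal Python sketch of the S-tier homepage checks. It assumes the agent has already fetched the page HTML and robots.txt; function and report field names are illustrative, not a spec.

```python
from html.parser import HTMLParser

class _SeoFlags(HTMLParser):
    """Collects the on-page signals the S-tier scan cares about."""
    def __init__(self):
        super().__init__()
        self.h1_count = 0
        self.meta_description = False
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "h1":
            self.h1_count += 1
        elif tag == "meta":
            name = (a.get("name") or "").lower()
            content = (a.get("content") or "").lower()
            if name == "description" and content.strip():
                self.meta_description = True
            elif name == "robots" and "noindex" in content:
                self.noindex = True

def robots_blocks_all(robots_txt: str) -> bool:
    """True if robots.txt disallows the whole site ('Disallow: /')."""
    for line in robots_txt.splitlines():
        key, _, value = line.partition(":")
        if key.strip().lower() == "disallow" and value.split("#")[0].strip() == "/":
            return True
    return False

def seo_quick_scan(html: str, robots_txt: str) -> dict:
    flags = _SeoFlags()
    flags.feed(html)
    return {
        "indexable": not flags.noindex,
        "meta_description_present": flags.meta_description,
        "single_h1": flags.h1_count == 1,
        "robots_blocks_all": robots_blocks_all(robots_txt),
    }
```

Sitemap presence and descriptive link text would need additional fetches and heuristics — worth asking Alex where the line is between what's worth automating for a free scan and what belongs in M-tier.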
1. Quick Scan scope — is this right?
Review the S-tier scope above. What checks should be in the free quick scan? What’s missing? What should be removed?
Alex’s notes:
2. Can you build the quick scan agent?
Could you build (or help build) an automated agent that runs the S-tier checks against any URL and produces a report?
  • Yes — I can build it myself
  • Yes with help — I know what it should check, but need dev help to build the agent
  • No — this needs to be human-only for now
If you need help, what kind? (dev to write the code, someone to set up the tool, training on agent building, etc.)
Alex’s notes:
3. Human vs. agent at each tier
For each tier, what’s realistic?
  • S — Fully agentic (no human needed)
  • S — Agent generates, human reviews before sending
  • M — Fully agentic
  • M — Agent generates, Alex reviews
  • M — Mostly human, agent assists
  • L — Alex-led, agent does data gathering
  • L — Fully human
  • XL — Fully human (strategy work)
4. Effort and timeline
How long to get the S-tier quick scan working? How about producing a first M-tier report for a real client?
S-tier estimate:
M-tier estimate:
What’s your availability? Are there other commitments that would affect this?
Availability notes:
5. What tools do you use today?
What SEO tools do you currently have access to? (Screaming Frog, Ahrefs, SEMrush, Moz, Google Search Console, etc.) Which would the agent need access to?
Current tools:
Accessibility (WCAG 2.2)
Category: Compliance • Cadence: Quarterly
Sydney
Tier | Label | Draft Scope | Delivery
S — Quick Scan | Automated WCAG check — homepage | axe-core or similar automated scan of homepage: violation count by severity (critical/serious/moderate/minor), compliance level (A/AA/AAA), top 5 issues with fix suggestions | Agent → automated report
M — Site Scan | Automated scan — full site | Automated scan across all pages (or top 50): aggregate violation counts, worst-offending pages, trend vs. previous scan | Agent + Sydney review?
L — Full Audit | Manual + automated audit | Automated scan + manual screen reader testing + keyboard navigation + focus order + color contrast review + ARIA audit | Sydney-led with agent assist
XL — Audit + Remediation | Audit + fix implementation | Everything in L + Sydney implements the fixes in the codebase | Fully human
Metric: WCAG 2.2 violation count • Goal: 0 critical violations, AA compliant
Baseline captures: Automated violation count by severity, current compliance level, pages with critical issues
BDM talking point: “Web accessibility lawsuits are up 300%. Do you know your current compliance level?”
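As a reference point: axe-core returns a results object whose `violations` each carry an `impact` level, so the S-tier rollup is mostly aggregation. A hedged sketch — the input shape follows axe-core's documented output; the summary field names are our invention to match the draft scope above.

```python
from collections import Counter

SEVERITY_ORDER = ["critical", "serious", "moderate", "minor"]

def summarize_axe_results(results: dict, top_n: int = 5) -> dict:
    """Roll axe-core 'violations' into the counts the S-tier report needs."""
    violations = results.get("violations", [])
    by_impact = Counter(v.get("impact") or "minor" for v in violations)
    worst_first = sorted(
        violations, key=lambda v: SEVERITY_ORDER.index(v.get("impact") or "minor")
    )
    return {
        "total_violations": len(violations),
        "by_severity": {s: by_impact.get(s, 0) for s in SEVERITY_ORDER},
        "top_issues": [
            {"id": v["id"], "impact": v.get("impact"), "help": v.get("help", "")}
            for v in worst_first[:top_n]
        ],
    }
```

Running axe-core itself requires a browser context (Playwright/Puppeteer) — that's the part to confirm Sydney can build or needs help with.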
1. Quick Scan scope — is this right?
We’re thinking axe-core against the homepage. Is that the right engine? What checks should the free quick scan cover? What would you add or remove?
Sydney’s notes:
2. Can you build the quick scan agent?
You’re both the accessibility specialist and a front-end developer. Could you build the automated scan agent, or do you need help?
  • Yes — I can build it myself (I know axe-core / Playwright / Puppeteer)
  • Yes with help — I know what to check but need help with the agent framework
  • No — I can define the checks but someone else needs to build it
Sydney’s notes:
3. Human vs. agent at each tier
Automated tools catch ~30–40% of WCAG issues. For each tier, what’s the realistic split?
  • S — Fully agentic (automated scan, no human review)
  • S — Agent scan + Sydney reviews before sending
  • M — Fully agentic (multi-page automated scan)
  • M — Agent scans, Sydney adds manual notes on critical pages
  • L — Must be human-led (manual testing can’t be automated)
  • XL — Fully human (code changes required)
4. Effort and timeline
How long to get a working S-tier automated scan? How about a first real M-tier report?
S-tier estimate:
M-tier estimate:
What’s your availability? Front-end work vs. accessibility work — how do we balance?
Availability notes:
5. Tools and limitations
What accessibility tools do you use today? Are there any we’d need to license? What are the known limits of automated scanning that we should set expectations around?
Current tools:
Known limitations to communicate to clients:
Security & Pen Testing
Category: Technical • Cadence: Monthly • Eric also owns: CMP (complete), Error Logs
Eric
Tier | Label | Draft Scope | Delivery
S — Quick Scan | HTTPS + library vulnerabilities | HTTPS enforcement, outdated JS libraries (known CVEs), security headers check (CSP, X-Frame-Options, HSTS, etc.), mixed content detection | Agent → automated report
M — Security Scan | Automated vulnerability scan | Full automated scan: OWASP Top 10 check, CMS/plugin version audit, exposed admin pages, directory listing, information leakage | Agent + Eric review?
L — Pen Test | Manual + automated pen test | Automated scan + manual testing: authentication bypass, injection attempts, session management, API endpoint review | Eric + Diamond + Brian
XL — Full Audit | Pen test + remediation plan | Everything in L + written remediation plan + implementation support | Fully human
Metric: Open vulnerability count (critical/high/medium/low) • Goal: 0 critical/high unresolved
Baseline captures: Vulnerability scan results, outdated JS libraries, missing security headers, known CVEs on CMS/plugins
BDM talking point: “Are you running outdated JavaScript libraries right now? Most websites are — and attackers scan for them automatically.”
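To anchor the scope conversation: the headers and mixed-content portions of the S-tier scan are mechanical. A minimal sketch, operating on already-fetched response headers and page HTML — the header list and report shape are our assumptions for Eric to confirm.

```python
import re

# Response headers an S-tier scan would flag when absent (assumed list).
RECOMMENDED_HEADERS = {
    "strict-transport-security": "HSTS",
    "content-security-policy": "CSP",
    "x-frame-options": "clickjacking protection",
    "x-content-type-options": "MIME-sniffing protection",
    "referrer-policy": "referrer leakage control",
}

def check_security_headers(headers: dict) -> dict:
    """Report which recommended headers are missing (case-insensitive match)."""
    present = {k.lower() for k in headers}
    missing = sorted(h for h in RECOMMENDED_HEADERS if h not in present)
    return {
        "missing": missing,
        "missing_why": [RECOMMENDED_HEADERS[h] for h in missing],
        "ok": not missing,
    }

def find_mixed_content(html: str) -> list:
    """Crude mixed-content check: http:// URLs in src/href on an HTTPS page."""
    return re.findall(r'(?:src|href)="(http://[^"]+)"', html)
```

The CVE check for outdated JS libraries is the harder piece (version fingerprinting plus a vulnerability feed) — that's where Eric's CMP-scanner experience matters most.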
1. Quick Scan scope — is this right?
You already built the CMP scanner. For security, what should the free quick scan check? Is the draft scope above right?
Eric’s notes:
2. Can you build the quick scan agent?
Given your CMP agent experience, how hard is a security quick scan agent?
  • Yes — straightforward, similar to CMP scanner
  • Yes but harder — security scanning has more edge cases
  • Partially — some checks are easy to automate, others aren’t
  • Need Diamond or Brian for the pen test tiers
Eric’s notes:
3. Human vs. agent at each tier
Security is inherently high-stakes. Where does human judgment become non-negotiable?
  • S — Fully agentic (safe — just checking public signals)
  • S — Agent scans, Eric reviews before sending (liability concern)
  • M — Agent can do most checks, Eric validates findings
  • L — Must be human-led (pen testing requires judgment + authorization)
  • XL — Fully human
Are there legal or liability concerns with automated security scanning we need to address?
Eric’s notes:
4. Effort and timeline
S-tier estimate:
M-tier estimate:
You’re also the owner of Error Logs and CMP. How does your bandwidth split across the three services?
Availability notes:
5. Diamond and Brian’s roles
Diamond and Brian are listed on security. What’s the division of labor? At what tier do they get involved?
Eric’s notes:
Google Analytics / GA4
Category: Compliance • Cadence: Quarterly • Also involved: Trevor, Aaron
Fred
Tier | Label | Draft Scope | Delivery
S — Quick Scan | GA tag present + firing | Check if GA4 tag is present on homepage, verify it fires correctly, check for common misconfigurations (duplicate tags, missing measurement ID, consent mode conflicts) | Agent → automated report
M — Setup Audit | GA4 configuration audit | Tag presence across all pages, conversion events configured, data stream setup, filters and exclusions, cross-domain tracking, consent mode integration | Agent + Fred review?
L — Full Audit | GA4 + GTM full audit | Full GA4 audit + GTM container review: tag firing rules, data layer implementation, custom events, ecommerce tracking, attribution model | Fred-led with agent assist
XL — Strategy | Audit + reporting strategy + dashboards | Everything in L + custom Looker Studio dashboards + KPI framework + quarterly reporting cadence | Fred + Trevor
Metric: Tracking accuracy % • Goal: 100% of key user actions tracked, 0 data gaps
Baseline captures: GA4 tag present/firing, conversion events configured, data gaps, cross-domain tracking
BDM talking point: “How confident are you that your Google Analytics data is actually accurate right now?”
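To frame what external detection can and cannot see, a rough sketch of static GA4 tag detection on fetched homepage HTML. The regexes are heuristics (our assumption, not a Google-documented check); verifying that the tag actually fires, and catching consent mode conflicts, would need a headless browser and is deliberately out of scope here.

```python
import re

MEASUREMENT_ID = re.compile(r"\bG-[A-Z0-9]{4,}\b")
GTAG_LOADER = "googletagmanager.com/gtag/js"

def detect_ga4(html: str) -> dict:
    """Static detection only: loader script present and measurement ID(s) found."""
    ids = sorted(set(MEASUREMENT_ID.findall(html)))
    loader_count = html.count(GTAG_LOADER)
    return {
        "tag_present": loader_count > 0 and bool(ids),
        "measurement_ids": ids,
        "duplicate_loader": loader_count > 1,   # same gtag.js included twice
        "loader_without_id": loader_count > 0 and not ids,
    }
```

Sites that load GA4 through GTM won't show a gtag.js loader at all — one more reason the "what's achievable automatically" question needs Fred's answer.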
1. Quick Scan scope — is this right?
For a free quick scan, we’re thinking: check if GA4 tag is present, if it fires, and flag obvious misconfigurations. Is that achievable automatically? What would you add?
Fred’s notes:
2. Can you build the quick scan agent?
You’re also on the BDM team. Do you have the technical chops to build this, or do you need a developer?
  • Yes — I can build it (I know the GA4 API / tag inspection well enough)
  • I can spec it precisely — but need a dev to build the agent
  • Aaron or Trevor could help build it — they know the technical side
Fred’s notes:
3. Human vs. agent at each tier
GA4 auditing can be partially automated, but much of it requires looking at the client’s actual property. Where does automation stop being useful?
  • S — Fully agentic (tag detection is straightforward)
  • S — Agent scans, Fred reviews (edge cases with SPAs, consent mode)
  • M — Needs client GA4 access to be meaningful (can’t scan from outside)
  • L — Fully human (GTM container review requires account access)
  • XL — Fully human (strategy work)
Key question: Can we do anything useful at M-tier without the client granting us GA4 access? Or is M-tier only possible after onboarding?
Fred’s notes:
4. Effort and timeline
S-tier estimate:
M-tier estimate:
How does your BDM role affect your availability for this? Would Trevor or Aaron take the lead on building?
Availability notes:
5. Trevor and Aaron’s roles
Trevor is also on Digital Marketing. Aaron is also on CMP. How should GA4 work split across the three of you?
Fred’s notes:
Uptime & Availability
Category: Operations • Cadence: Daily • Greg also owns: SSL, CSP • Also involved: Diamond
Greg
Tier | Label | Draft Scope | Delivery
S — Quick Scan | Homepage ping | HTTP status check on homepage, response time measurement, SSL valid check (ties to SSL service) | Agent → automated
M — Multi-page Monitor | Key pages monitored | Scheduled checks on 5–10 key pages, response time tracking, downtime event logging, basic alerting to AD | Agent (continuous)
L — Full Monitor | All pages + alerting | Full site monitoring, multi-location checks, instant alerts (email + Slack?), downtime duration tracking, monthly uptime % report | Greg's infrastructure
XL — SLA Reporting | Guaranteed uptime reporting | Everything in L + SLA compliance reports, root cause analysis for downtime events, escalation procedures | Greg + Diamond
Metric: Uptime percentage • Goal: >99.9% monthly
Baseline captures: Current uptime % (last 30 days), downtime events per month, longest single downtime event
BDM talking point: “What would one hour of downtime during your busiest period cost you?”
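To ground the goal number: 99.9% over 30 days allows roughly 43 minutes of downtime. A small sketch of that arithmetic plus how a single probe might be classified — the 2-second slow threshold is a placeholder to confirm with Greg.

```python
def uptime_percent(downtime_minutes: float, period_days: int = 30) -> float:
    """Uptime % over a period; 99.9% over 30 days allows ~43.2 min of downtime."""
    total_minutes = period_days * 24 * 60
    return round(100.0 * (1 - downtime_minutes / total_minutes), 3)

def classify_check(status_code: int, response_ms: float, slow_ms: float = 2000) -> dict:
    """Classify one HTTP probe; 2xx/3xx counts as up, slow_ms is a placeholder."""
    up = 200 <= status_code < 400
    return {"up": up, "slow": up and response_ms > slow_ms, "status": status_code}
```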
1. What do you already have?
You’re listed as running uptime monitoring infrastructure already. What exists today? What tool/system are you using? Can we build NWS-360 on top of it, or do we need something new?
Greg’s notes:
2. Notification strategy (open question)
This has been an open question since day one: when and how do clients get notified of downtime? Email? Slack? Dashboard only? What’s the threshold — any downtime, or only after X minutes?
Greg’s notes:
3. Quick scan vs. ongoing monitoring
Uptime is different from other services — the value is continuous monitoring, not a one-time scan. Does a free “quick scan” even make sense for uptime? Or is the S-tier just “we check if your site is up right now” as a lead-gen gimmick?
  • S-tier quick scan is useful — even a point-in-time check shows response time and basic health
  • S-tier is a gimmick — real value starts at M-tier continuous monitoring
  • Skip S-tier for uptime — bundle it as a freebie in the marketing funnel scan alongside other services
Greg’s notes:
4. Can this be agentic?
Uptime monitoring is inherently automated. Is this already agent-like, or does the NWS-360 agent framework add something your current system doesn’t have?
  • My current system already does this — we just need to pipe data to the NWS-360 portal
  • Current system is basic — NWS-360 agent could add reporting, alerting, analysis
  • Need to build from scratch for NWS-360
Greg’s notes:
5. SSL and CSP — bundled or separate?
You own three services: Uptime, SSL, and CSP. Do these share infrastructure? Should the quick scan bundle all three (uptime + SSL expiry + CSP header check) since they’re all your domain?
Greg’s notes:
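One data point for the bundling question: the SSL-expiry piece is a few lines once the agent has the certificate's notAfter field from Python's ssl.getpeercert(). A sketch, with the 30-day warning threshold as a placeholder:

```python
import ssl
from datetime import datetime, timezone

def days_until_expiry(not_after: str) -> int:
    """not_after as returned by getpeercert()['notAfter'],
    e.g. 'Jun  1 12:00:00 2030 GMT'."""
    expires = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(not_after), tz=timezone.utc
    )
    return (expires - datetime.now(timezone.utc)).days

def ssl_quick_check(not_after: str, warn_days: int = 30) -> dict:
    days_left = days_until_expiry(not_after)
    return {
        "days_left": days_left,
        "expired": days_left < 0,
        "expiring_soon": 0 <= days_left < warn_days,
    }
```

Fetching the cert itself is one TLS handshake, which the uptime probe already performs — supporting the case for bundling all three checks.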
6. Effort and timeline
Time to pipe existing monitoring into NWS-360 reporting:
Time to build SSL + CSP quick scans:
Diamond is listed on Uptime and SSL too. What’s the split between you two?
Availability notes: