
Snapshot of open demand

126 open roles across 7 teams. Solutions and Engineering dominate, but the distribution tells a story about where Mistral is investing — and where job freshness is becoming a concern.

Open jobs
126
Avg age ≈ 150 days. Only 21% are fresh (<30 days).
Region mix
EMEA-heavy
78.6% EMEA, 12.7% Americas, 8.7% APAC. Paris alone = 68%.
Oldest job
851
Days. A 2.3-year-old AI Scientist role still live on the board.
Multi-location jobs
47
37% of roles span multiple cities — complicating recruiter assignment and reporting.
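These headline numbers can be reproduced straight from Lever's public postings feed. A minimal sketch, assuming the feed's documented shape (`createdAt` in epoch milliseconds, `categories.location` as a free-text string) and using a small hypothetical sample in place of a live fetch:

```python
import time

# Hypothetical sample shaped like Lever's public postings feed
# (live data would come from https://api.lever.co/v0/postings/<site>?mode=json).
now_ms = time.time() * 1000
DAY_MS = 86_400_000
postings = [
    {"text": "AI Scientist", "createdAt": now_ms - 851 * DAY_MS,
     "categories": {"location": "Paris, London"}},
    {"text": "Solutions Architect", "createdAt": now_ms - 12 * DAY_MS,
     "categories": {"location": "Paris"}},
]

ages = [(now_ms - p["createdAt"]) / DAY_MS for p in postings]
avg_age = sum(ages) / len(ages)
fresh_share = sum(a < 30 for a in ages) / len(ages)          # share under 30 days
multi_loc = sum("," in p["categories"]["location"] for p in postings)

print(f"open={len(postings)} avg_age={avg_age:.0f}d "
      f"fresh={fresh_share:.0%} multi_location={multi_loc}")
```

Run against the full 126-posting feed, the same four lines yield the stat cards above.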

Where you are hiring

Solutions (pre-sales, implementation, customer success) drives ~30% of open demand. Engineering, Corporate, and Research each hold ~16-17%. This is the hiring profile of a company in GTM scale-up mode.

Team breakdown (all 126 roles)
Chart A
Solutions is your largest hiring function — a clear signal that Mistral is investing heavily in enterprise adoption, not just model R&D.
Solutions 38 · 30.2%
Engineering & Infra 22 · 17.5%
Corporate 22 · 17.5%
Research 20 · 15.9%
Business (Sales) 14 · 11.1%
Product 8 · 6.3%
  • 7 distinct teams in the Lever taxonomy — clean and well-organized.
  • Marketing has only 2 roles — either lean or potentially under-resourced for GTM scale.
Region breakdown (by country)
Chart B
78.6% of roles are EMEA — overwhelmingly France. Americas and APAC are expansion territories, not primary hiring hubs (yet).
EMEA (FR, UK, DE, LU, etc.) 99 · 78.6%
Americas (US, CA) 16 · 12.7%
APAC (SG, AU) 11 · 8.7%
  • 15 distinct location strings — from Paris to Casablanca to Sydney.
  • Solutions is the most globally distributed team; Research and Engineering are Paris-centric.

The Paris concentration risk

68.3% of all open roles list Paris as the primary location. For a company building a global enterprise sales motion, this creates talent pool limits, timezone coverage gaps, and candidate pipeline constraints.

Location concentration
Chart C
Paris dominates. 86 of 126 jobs are tagged to your HQ. That's strategic for culture-building, but limits your addressable talent market.
Paris 86 · 68.3%
Palo Alto 9 · 7.1%
Singapore 9 · 7.1%
New York 5 · 4.0%
All others (10 cities) 17 · 13.5%
What this means

If your Palo Alto, Singapore, and NYC pipelines are thin, over 80% of your open demand funnels through a single talent market. That's a scaling constraint.

Multi-location job complexity
Chart D
47 jobs (37%) list multiple locations in a single string — e.g., "Paris, London, Luxembourg, Marseille". This is great for candidate flexibility but creates reporting noise.
Single location 79 · 62.7%
Multi-location 47 · 37.3%
  • 34 unique location string patterns — from "Paris" alone to "Paris, London, Berlin/Munich/Frankfurt, Barcelona/Madrid, Amsterdam, Brussels".
  • We'd normalize these into primary location + location flexibility flag for cleaner dashboards.
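The normalization in the last bullet can be sketched in a few lines; the `primary_location` / `location_flexible` field names are our own convention, not Lever's:

```python
def normalize_location(raw: str) -> dict:
    """Split a multi-city location string into a primary location plus
    a flexibility flag; the first-listed city is treated as primary."""
    # Handle both separators seen on the board, e.g.
    # "Paris, London, Luxembourg, Marseille" and "Berlin/Munich/Frankfurt".
    cities = [c.strip() for part in raw.split(",")
              for c in part.split("/") if c.strip()]
    return {
        "primary_location": cities[0] if cities else None,
        "location_flexible": len(cities) > 1,
        "all_locations": cities,
    }

print(normalize_location("Paris, London, Luxembourg, Marseille"))
```

Applied once at ingestion, this collapses the 34 string patterns into one governed field plus a boolean, so dashboards group by primary location instead of raw strings.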

The remote work paradox

For a cutting-edge AI company, Mistral's remote footprint is surprisingly small: only 4.8% of roles are fully remote. And some of those "remote" jobs still list Paris as the location.

Workplace type breakdown
Chart E
56% onsite, 39% hybrid, 5% remote. This is an in-office culture — which is fine, but it limits your talent pool for hard-to-fill roles.
Onsite 71 · 56.3%
Hybrid 49 · 38.9%
Remote 6 · 4.8%
  • Research skews hybrid (65%); Corporate skews onsite (77%).
  • Only 1 Business (Sales) role is remote — could explain why ANZ and Americas are harder to fill.
Remote metadata inconsistency
Chart F
4 of your 6 "remote" jobs list Paris as the location. That's a mixed signal to candidates — and a reporting problem for TA ops.
Remote + Paris location 4 jobs
Remote + non-Paris location 2 jobs
Example conflict

Site Reliability Engineer is marked remote with location Paris. Is this "remote in France" or "remote anywhere"? Candidates don't know — and neither do your filters.
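Surfacing these conflicts is a simple scan over the feed. A sketch, assuming Lever's `workplaceType` field; treating HQ-tagged remote jobs as conflicts is our heuristic, not a Lever rule:

```python
def remote_location_conflicts(postings, hq="Paris"):
    """Return remote-tagged jobs whose location string still points at HQ,
    i.e. ambiguous between 'remote in-country' and 'remote anywhere'."""
    return [
        p["text"] for p in postings
        if p.get("workplaceType") == "remote"
        and hq.lower() in p.get("categories", {}).get("location", "").lower()
    ]

# Hypothetical sample in the feed's shape:
sample = [
    {"text": "Site Reliability Engineer", "workplaceType": "remote",
     "categories": {"location": "Paris"}},
    {"text": "Solutions Architect", "workplaceType": "remote",
     "categories": {"location": "Singapore"}},
]
print(remote_location_conflicts(sample))  # → ['Site Reliability Engineer']
```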

Job freshness & aging risk

Average job age: 150 days, pulled up by 14 roles that have been open for over a year. Without segmentation, these ancient reqs will distort every time-to-fill (TTF) metric you report.

Job age distribution
Chart G
Only 21% of jobs are fresh (<30 days). A third sit in the active window (30–90 days), and 32.5% are 180+ days old.
Fresh (<30 days) 27 · 21.4%
Active (30–90 days) 42 · 33.3%
Aging (90–180 days) 16 · 12.7%
Stale (180–365 days) 27 · 21.4%
Ancient (>365 days) 14 · 11.1%
  • 14 roles have been open longer than a year — the oldest is 851 days (2.3 years).
  • Research and Business have the highest concentration of ancient roles.
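The five buckets map directly onto age thresholds; a sketch using the cut-offs from the chart above:

```python
def age_bucket(days_open: float) -> str:
    """Map a req's age in days to the five freshness buckets."""
    if days_open < 30:
        return "Fresh"
    if days_open < 90:
        return "Active"
    if days_open < 180:
        return "Aging"
    if days_open <= 365:
        return "Stale"
    return "Ancient"

print([age_bucket(d) for d in (12, 45, 120, 300, 851)])
# → ['Fresh', 'Active', 'Aging', 'Stale', 'Ancient']
```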
The oldest jobs on your board
Chart H
These roles are either evergreen pipelines (not flagged as such) or stuck reqs that need attention. Either way, they're skewing your metrics.
AI Scientist - Paris/London 851 days
Account Executive, France 673 days
Talent Acquisition - EMEA 609 days
Site Reliability Engineer 602 days
AI Scientist - Palo Alto 551 days
Impact on reporting

If you calculate "average time-to-fill" across all reqs, these 14 ancient roles inflate the reported average by roughly 60 days: excluding them drops it from ~150 to ~90. Segment them out or tag them as evergreen.
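The lift is simple weighted-average arithmetic; a sketch using the report's counts, with the ancient reqs' average age (not derivable from the public feed alone) assumed at ~600 days:

```python
# Known from the board: 126 reqs averaging 150 days open, 14 of them >365d.
total_jobs, overall_avg = 126, 150
ancient_n, ancient_avg = 14, 600   # ancient_avg is an assumption

# Weighted-average identity: subtract the ancient reqs' age mass,
# then re-average over the remaining 112 reqs.
segmented_avg = (total_jobs * overall_avg
                 - ancient_n * ancient_avg) / (total_jobs - ancient_n)
print(f"avg excluding ancient reqs ≈ {segmented_avg:.0f} days")  # vs 150 with them in
```

The exact figure depends on the ancient reqs' true ages, but any plausible assumption puts the segmented average near a third of the headline number.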

Data quality signals

Lever is well-structured, but small gaps compound. 7.9% of jobs are missing commitment type, and 100% are missing salary data in the public feed (likely intentional).

Missing commitment type
Data Quality
10 jobs have no commitment field. These are ungroupable in FTE planning and headcount reports.
Full-time 110 · 87.3%
Fixed term / freelance 6 · 4.8%
Missing 10 · 7.9%
Salary transparency
Data Quality
0 jobs have salary data in the public feed. This is common for EU companies, but affects candidate conversion in US roles.
Salary visible 0 · 0%
Salary hidden 126 · 100%
  • Colorado, NYC, and California now require salary disclosure — affecting 14 US roles.
Seniority distribution
Title Analysis
Based on job title patterns: 70% IC, 13.5% Manager+, 8.7% Lead, 7.9% Senior.
Individual Contributor 88 · 69.8%
Manager / Director+ 17 · 13.5%
Lead / Principal 11 · 8.7%
Senior 10 · 7.9%

What WezOps would do

Your Lever instance is clean. The opportunity is to operationalize the job catalog — treating it as a product that feeds recruiter performance, funnel analytics, and hiring manager accountability.

Phase 1: Job catalog hygiene
Week 1–2
Clean up the current board, establish governance rules, and create a baseline for ongoing quality monitoring.
  • Audit the 14 ancient roles
    • Close or archive stale reqs (>365 days) that aren't intentionally evergreen.
    • Add an Evergreen tag for pipeline roles so they can be excluded from TTF reports.
  • Normalize multi-location jobs
    • Create a "Primary Location" + "Location Flexibility" schema for cleaner reporting.
    • Standardize the 34 location string variations into a governed taxonomy.
  • Fix remote metadata conflicts
    • Reconcile the 4 "remote + Paris" jobs — are they truly remote or France-only?
    • Document a clear policy for how remote vs hybrid vs onsite should be applied.
  • Fill missing commitment types
    • Update the 10 jobs missing this field.
    • Set Lever validation rules to require it on new postings.
Phase 2: Recruiter ops dashboards
Week 3–6
Once connected to internal Lever data (stages, candidates, offers), layer on operational analytics that drive accountability.
  • Job freshness monitoring
    • Weekly alerts when jobs cross 90-day and 180-day thresholds.
    • Auto-flag for TA leadership review.
  • Recruiter workload balancing
    • Assign recruiters to jobs by region/team, track load distribution.
    • Surface imbalances before they become bottlenecks.
  • Funnel analytics by job segment
    • Time-to-fill, pass-through rates, source quality — all indexed by the cleaned job metadata.
    • Separate evergreen from standard headcount in all metrics.
  • Geographic pipeline health
    • Track candidate volume and conversion by location.
    • Surface which markets are underperforming (likely: ANZ, Americas).
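The freshness alerts in Phase 2 reduce to a weekly scan over req ages. A minimal sketch; the 90/180-day thresholds come from the plan above, while the one-week de-duplication window is our assumption:

```python
DAY_MS = 86_400_000

def freshness_alerts(postings, now_ms, thresholds=(90, 180)):
    """Flag reqs that crossed a freshness threshold since the last weekly
    run, so each req escalates once per threshold rather than every week."""
    alerts = []
    for p in postings:
        age_days = (now_ms - p["createdAt"]) / DAY_MS
        for t in thresholds:
            if t <= age_days < t + 7:  # crossed within the last cycle
                alerts.append((p["text"], t))
    return alerts

# Hypothetical weekly run over two sample reqs:
now = 1_700_000_000_000  # run timestamp (epoch ms)
jobs = [
    {"text": "AI Scientist - Paris/London", "createdAt": now - 92 * DAY_MS},
    {"text": "Product Manager", "createdAt": now - 40 * DAY_MS},
]
print(freshness_alerts(jobs, now))  # → [('AI Scientist - Paris/London', 90)]
```

The output feeds the TA-leadership review queue; evergreen-tagged reqs would be filtered out before the scan.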

Ready to operationalize your talent data?

Let's discuss how WezOps can help you turn job catalog insights into recruiter performance and funnel accuracy.

Schedule a 30-min call