45-60 min culture + technical-fit interview • Point of Sale team
Overriding frame (thread this through every answer)
"Frontend-track senior who owns the UI product surface end-to-end — defines the problem, sets the architecture, ships, and steps into the backend when the product needs it. DataLinks async ingestion pipeline is the living proof of cross-stack ownership."
The single most important minute of the interview. Practice out loud until it's natural, not read.
Hello. My name is Tomasz Wylezek, and I've been working in web development for the last ten years or so.
In that time I've touched most UI frameworks — jQuery, Angular, React, Next.js, Vue — and worked with APIs written in Java, Go, PHP, Python, and Node. I've also had the chance to deliver across the full stack, from backend through frontend to infrastructure.
I take full ownership of what I deliver, and I love shipping product, not just closing tickets.
At the same time, I care about the engineering process. I try to document every decision and share context so the team is aligned on what we want to do and how. I like working through RFCs or ADRs and iterating on them, so we always know the why as well as the what.
I've worked in startup environments, mid-size companies, and software houses — but I've never been exposed to finance, and that's one of the reasons Revolut is so exciting to me.
The interviewer will walk step-by-step through the CV. One or two sentences per role, consistent theme: more ownership, more scope, closer to the customer. Zero negative notes about past employers.
I started at JCommerce as a junior while finishing my degree. Learned the craft, shipped my first production features, then wanted broader exposure across more projects and clients.
Moved for breadth — multiple client projects in parallel, learned to ramp on unfamiliar codebases fast. That's a skill I've relied on every move since.
Joined Perform for depth in one domain — sports data collection at production scale. Shipped live data-entry apps for Ice Hockey (NHL), Basketball (NBA and NCAA), and Soccer (Turkish leagues), including a mobile-first PWA with full offline support for operators in stadiums without reliable internet. Those apps ran in production for years — Stats Perform brought me back on a separate contract in 2023 specifically to work on them.
External factor, not a culture mismatch. Joined October 2019; by March 2020 the pandemic hit, car shipping and the supply chain halted, the company went to 7/10 workload across the board, and automotive was still strictly on-site before the industry moved remote. The project I owned shipped; the business conditions during the first COVID wave made it a natural point to move on to work I could fully commit to. Side note: this was my first time working on a UI team mostly made up of non-Polish colleagues, even though I was still based in Poland.
"AUTO1 was short for a clear external reason. I joined in October 2019; by March 2020 the first COVID wave hit the automotive industry particularly hard — car shipping and supply chain halted, and the company went to a 7/10 workload. Automotive was still strictly on-site before the industry moved remote, so it wasn't a stable situation to build in. The project I owned shipped, but the conditions were a natural point to move on."
Scandinavian legal-information platform — a multi-year migration off a legacy system with substantially expanded capabilities. First time I was fully embedded in an external team — the client's day-to-day, not Polish-team-shipping-to-client. That's also where I matured how I give difficult feedback (the Danish dev / 100-comment PR story in Module 3.4).
Step into architectural leadership on an established product. Joined Kuma (OSS service-mesh GUI) as lead frontend developer, owning architecture decisions — bridging OSS and enterprise product shapes onto a shared codebase as the team grew. Once foundations were in place, rotated onto Kong Konnect and the Dev Portal.
Returned to the team I'd worked with at Perform Group — now Stats Perform after the merger. They asked me to help keep the NHL hockey app running through the transition, then I moved to the Core Data team and built the GraphQL CMS that became the authoritative source for sports data across the organization.
Joined for the scope — UI layer of an AI-driven semantic data platform, daily work with CEO and CTO, permission to step into backend work when the product needs it. Scope has been expanding ever since: from UI to UI + backend API + full async ingestion pipeline.
"The through-line across all of these moves is the same: more ownership, more problem definition, closer proximity to the customer. Revolut is the natural next step."
One primary + one backup per vertical, with likely follow-ups. Each primary is 200-350 words / ~90-120s spoken. Backup is <120 words.
At DataLinks I owned the UI layer of our AI-driven semantic data platform.
Users were supposed to import their datasets and start querying them in natural language immediately.
The problem: ingestion was fully synchronous, chunked at 200-300 KB per request, and each chunk went through LLM-driven inference so parse time depended on the number of links and connections in the data.
For a 5 MB file in a realistic shape, worst case was 2-3 minutes of the browser tab blocked on an in-flight HTTP request — users couldn't do anything else on the platform, and we were sitting right at the edge of HTTP timeout territory.
If anything failed mid-import, there was no resume point — users had to rerun from scratch or build external tooling around us.
Unblock real-size datasets — the kind a prospect brings into a POC — without breaking the existing UX or pushing complexity onto the user.
I ran root-cause analysis first.
The long wait was a symptom; the actual issues were three: the sync model blocked the tab, there was no resume-from-failure, and throughput was tied to LLM latency we couldn't control.
I rejected two alternatives: scaling the backend alone didn't fix the tab-blocking problem, and tighter chunking with progress indicators didn't solve error recovery.
The right call was an async ingestion pipeline with job state persisted server-side.
The backend was in Scala — a language I had no prior production experience with.
I stepped in anyway: built the endpoints, owned the job orchestration, and wired it into the UI so users submit and walk away.
Once that was stable I layered natural-language query inference on top — first draft ported from an external POC into the core system, then iterated with the team.
Async ingestion shipped to production.
5 MB files that used to block users for minutes now run in the background while they keep working.
Because data lands server-side first, we unlocked a refining step — users adjust or re-run inference on already-ingested data [METRIC NEEDED].
The pipeline became the foundation the NL-query inference layer sits on.
Net ownership: I went from the UI layer to owning the full async pipeline end-to-end.
At AUTO1 the car-lifecycle platform had a handover bottleneck.
Vehicle paperwork took two days to process, blocking delivery and piling storage costs on the lot.
I shifted the flow from sequential manual review to parallelized automated processing — humans only in the loop on exceptions.
Two days became twelve hours.
That directly cut storage cost per vehicle and got cars to customers faster.
The win wasn't the code — it was spotting that the blocker was the workflow, not the volume.
Q: How did you guarantee the async pipeline wouldn't lose data mid-import?
A: Job state persisted server-side with per-chunk checkpoints; retries resume from the last successful checkpoint instead of the top.
Q: What was the hardest part of picking up Scala?
A: Not syntax — the async concurrency model. Leaned on CTO review, shipped a narrow surface first, expanded once I trusted my mental model.
Q: Why take backend work as the UI owner?
A: Ownership at DataLinks is defined by the problem, not the stack. If the UI is blocked and the unblocker is backend, the backend is my problem too.
At Perform Group I owned frontend for three live sports-data collection apps — the tools operators use in real time during matches to feed stats to broadcasters and downstream clients.
Scope covered three sports, each with different operational realities: Ice Hockey (NHL), Basketball (NBA and NCAA), and Soccer (Turkish 1st and 2nd leagues).
Different event volumes, different crew sizes, different connectivity environments.
Own three apps against one non-negotiable goal — zero data loss mid-match — while matching each sport's real operating conditions.
Hockey and basketball ran paired operators over WebSockets on desktop. Soccer operators were in the field on phones and tablets without reliable internet.
I shipped three apps on a shared architectural spine, with a deliberate split where the environment demanded it.
Hockey: React + Redux + WebSockets desktop client for NHL, paired operators.
Basketball: React + Redux + TypeScript + WebSockets desktop client for NBA and NCAA, paired operators, higher event volume per match.
Soccer: Designed from day one as a mobile-first PWA with full offline support — service worker, local persistence, queued event submission — because Turkish-league operators worked in venues where reliable internet wasn't guaranteed.
I decomposed per-sport event models, shared WebSocket protocol conventions across hockey and basketball, and bounded the offline-sync logic to soccer where it actually paid for itself.
Hockey Client: 1,200+ NHL matches covered, 35 operators, ~200 events per match.
Basketball Client: 2,000+ NBA and NCAA matches covered, 50 operators, ~500 events per match.
Soccer Client: Turkish 1st and 2nd league coverage from mobile devices, offline-first.
The part I'm proudest of is longevity. In 2023 — four years after I'd left — Stats Perform brought me back on a separate short-term contract specifically to update auth and data flow on these apps, because they were still in active production use.
When I came back, the team leader and PO walked me through the bug board: the items on it were almost all feature requests dressed as bugs — "X doesn't export in format Y" — not real defects. The apps had held up in live production across a company merger.
I delivered the auth and data-flow update in six months; the contract was extended, and they moved me onto Node backend work despite no prior Node production experience. That trust was a direct outcome of how the original apps performed over four years without me.
This is the goal-setting story I bring to Revolut: set the quality bar upfront, decompose by real operating conditions rather than forcing uniformity, and build so the work keeps paying off long after you ship it.
On the Core Data team we had a goal to replace an aggregated legacy sports-data system.
I built out the GraphQL CMS backend meant to be the authoritative source across the org.
The hard part wasn't the platform itself — it was growing it without breaking the dozens of downstream apps migrating onto us.
I collaborated directly with business stakeholders on demand forecasting and roadmap planning — new muscle for me.
The legacy system is now substantially displaced.
Q: Why PWA only for soccer and not for hockey or basketball?
A: Different operating conditions. NHL and NBA arenas have stable press-box networks and paired operators on desks — desktop with WebSockets was the right tool. Turkish-league venues didn't guarantee internet, and soccer operators were mobile during the match. Offline-first paid for itself there; it would have been over-engineering on the other two.
Q: How did you measure success while building?
A: Two levels — operator UX under pressure (can they keep up with live action?) and downstream data integrity (do events arrive in order, without loss?). Paired operators gave us a natural cross-check on desktop; on the soccer PWA the local queue was ground truth, flushed when the connection returned.
Q: What did you actually change when you came back in 2023?
A: Auth flow rewrite post-merger — identity provider had changed — and data-flow patches on the pipeline feeding the apps. The UI layer mostly held; that was the proof point.
Q: Why frontend-track at Revolut if you've been full-stack?
A: My home is the UI product surface. Backend work I do is always in service of the product the user actually touches.
| Metric | Target | Actual | Timeframe |
|---|---|---|---|
| Delivery cadence with CEO/CTO | Shippable increment every 1-2 weeks | Sustained — something shipped or deliberately killed each cycle | Weekly |
| Ingestion file size serviceable without blocking UX | Multi-MB files, async, no tab-blocking UX | Before: 200-300 KB sync chunks, 5 MB = 2-3 min blocked tab. After: async, no UX limit. | Quarterly |
| Scope ownership expansion | Step into backend/infra when value demands | Expanded from UI-only to backend API + async ingestion pipeline | Rolling |
| Feature adoption post-ship | Features land and get used, or are killed fast | Active ship-and-kill discipline — unused features retired quickly | Rolling |
I want to be honest about this up front.
DataLinks is a small AI startup and I work directly with the CEO and CTO daily.
We don't run formal OKRs, and I'm not tracking against a quarterly dashboard.
What I do track is weekly outcome alignment — what matters this week, what shipped, what got killed, what's next.
My job as a senior is to make the right ship-vs-kill calls, not just execute, because I have bandwidth to move one big thing per cycle and I need to pick correctly.
Clearest example: the ingestion decision.
User complaints about the minutes-long waits and HTTP timeouts were landing as "speed up the load" / "make the wait less painful" requests.
I could have taken them at face value.
Instead I pushed back in our weekly sync: the wait time is the symptom, the sync model is the disease, and a cosmetic fix buys us two weeks before the same customer asks why their 20 MB file fails halfway.
I proposed the async pipeline work — bigger, in an unfamiliar backend, pushing out two smaller UI items.
I got the go-ahead, shipped it, and the wait-time complaints stopped entirely.
That's the shape of my KPI: not "wait times tweaked," but "right problem solved."
Since I joined, the rhythm has held — something real ships or gets killed every cycle.
Scope I own expanded from UI-only to UI + async ingestion + parts of the backend API.
[METRIC NEEDED] features shipped vs. killed last quarter, prospects unblocked by async ingestion.
If you want to see me against formal KPIs, I can also talk about Stats Perform with hard numbers on downstream CMS consumers migrating off legacy.
On Core Data we had concrete KPIs: number of downstream apps migrated onto the new GraphQL CMS, legacy-system dependency reduction, and roadmap-commitments-met with business stakeholders.
I worked to those targets for almost two years.
Measurable progress was teams cutting over — which grew steadily while legacy traffic dropped.
[METRIC NEEDED] exact migration count or percent displacement.
That's why I'm comfortable working to structured metrics when the team runs that way.
Q: If you had to define three formal KPIs for your role tomorrow, what would they be?
A: Shippable increment per sprint, scope-expansion events per quarter, and a user-facing reliability metric for ingestion and query — because those are what CEO/CTO actually ask about informally today.
Q: Without formal KPIs, how do you know you're working on the right thing?
A: Weekly alignment with CEO/CTO and direct feedback from prospects. If the right thing isn't obvious, that's a signal I'm not close enough to the customer.
Q: Does the lack of formal tracking worry you about joining Revolut?
A: No — I've worked to formal targets at Stats Perform and Kong. I know what good looks like and I'll adapt to whichever cadence the team uses.
At Leocode I was on a team building a legal-information platform for the Scandinavian market.
One of the devs on the team, working out of Denmark, submitted a pull request.
I left over 100 comments on it.
The next thing I heard was that she'd gone to our manager to say I was "nitpicking."
Two things that pulled in opposite directions.
Keep the technical bar up — most comments were legitimate: conventions, missing tests, things we'd agreed on.
And repair the relationship — I wanted to keep working with her long-term, not win a single review.
I asked her for a call directly — not through the manager, not written back-and-forth.
I owned my side first.
A hundred comments without context reads as an attack on the person, even when they're all about code.
I walked her through the why behind the comments — not defending each one, but making intent visible: conventions, tests, things we'd agreed to, pointers to fix rather than verdicts.
I proposed a new process: when I had a lot to say on a PR, I'd call her first, we'd talk it through, and the review would capture what we agreed.
She agreed to try it, and said she now understood the comments were on the code, not on her.
We worked together productively for another six months.
The call-first process stuck.
When she visited Warsaw later she reached out to meet up — which told me the relationship was genuinely repaired, not just patched over.
The technical bar didn't drop, and I learned a lasting lesson: review the shape of the feedback, not just the content.
We had a contractor at Stats Perform who produced nothing after two weeks.
"In progress, about to push" — nothing landed.
I tried calls several times to help; he was always "too busy."
After a month he produced a single commit that was genuinely not reviewable.
I'd given him coaching time and he wasn't engaging.
I went to the team lead with specifics — calls attempted, the one commit, the gap against the sprint plan — and he was let go shortly after.
Coaching is senior work, but so is knowing when it's no longer coaching and you're protecting the project.
Q: How do you give tough feedback without it landing as personal criticism?
A: Three rules: comments on the code, not the person; ask a question before making a verdict; link to the why, not just the what. Same approach I use teaching Coders Lab bootcamps.
Q: Have you ever changed your mind during one of these conversations?
A: Yes — the Leocode call changed how I run PR reviews. I'd assumed volume of comments was the measure of thoroughness; I learned it's also the measure of how hard the PR is to respond to.
Q: Difference between coaching a junior and giving up on one?
A: Whether they engage with the feedback. If someone misses deadlines but tries, I stay in. If they avoid the conversation, I escalate.
Two short STAR stories for "tell me about delivering under time pressure."
At AUTO1 vehicle handover paperwork took two full days per car — blocking delivery, stacking storage costs, and the backlog grew faster than operators could clear it.
Get processing time down so storage and delivery slots weren't being eaten by paperwork.
Shifted the workflow from sequential manual review to parallelized automated processing — humans only in the loop on exceptions.
Prioritized high-volume document types first for fast impact; edge cases into a second iteration.
Two days became twelve hours.
Storage costs per vehicle dropped; cars reached customers faster.
Lesson: under pressure, the fastest path is often changing what you're doing, not doing the existing thing faster.
When the US imposed new tariffs, the commercial team needed a fast, customer-facing response showing how our dataset-curation tool could support tariff-reaction use cases.
Our CEO vibe-coded a weekend prototype against that use case. Once we saw it worked, we had roughly a two-week window to turn the prototype into a production-integrated tool customers could actually use — and our natural-language query layer was still in flight.
Take a weekend prototype to reproducible production, integrate it with our existing query language, and make it externally consumable with billing and auth — before the commercial window closed.
I took ownership of the productionization stream and parallelized two tracks rather than completing one before starting the next.
Track 1 — core integration: ported the prototype's logic into our system, made it reproducible, wired it into our query language, and iterated with LLM evaluation tooling (WebSearch for benchmarking, Langfuse for tracing, other eval frameworks for prompt-level scoring) to validate output quality before shipping.
Track 2 — external access + billing: built a standalone Python agent customers could hit directly, wired it to Stripe for API-key issuance and billing, persisted the key binding in our DB, and secured the agent-to-main-service path with hashed-header token validation.
Shipped both tracks inside the two-week window.
The company went from weekend-prototype screenshot to a real, authenticated, Stripe-billed customer endpoint.
Lesson: under a hard deadline, parallel tracks with clear contracts between them beat hand-finishing one before starting the next.
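If the interviewer drills into "hashed-header token validation," the agent-to-main-service trust check can be sketched like this — an HMAC signature the agent attaches per request, recomputed and compared in constant time on the main service. Header handling and secret names are illustrative, not the actual DataLinks implementation:

```typescript
// Sketch of shared-secret HMAC validation between the standalone agent and
// the main service: the agent signs the request body, the service recomputes
// the signature and compares in constant time to avoid timing leaks.
import { createHmac, timingSafeEqual } from "node:crypto";

const SHARED_SECRET = "replace-with-env-secret"; // hypothetical; load from env

// Agent side: sign the outgoing request body.
function signRequest(body: string): string {
  return createHmac("sha256", SHARED_SECRET).update(body).digest("hex");
}

// Main-service side: recompute and compare without leaking timing info.
// timingSafeEqual throws on length mismatch, so check lengths first.
function validateHeader(body: string, headerToken: string): boolean {
  const expected = Buffer.from(signRequest(body), "hex");
  const given = Buffer.from(headerToken, "hex");
  return given.length === expected.length && timingSafeEqual(given, expected);
}
```

A plain bearer token would also have worked for the two-week window; signing the body has the advantage that a leaked header value can't be replayed against a different payload.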
Three things draw me to Revolut specifically.
First, the domain. I've spent the last three years in sports data and AI, and I want to learn fintech. The constraints are different, the scale of impact is different, and fintech is a domain where a UI decision can move real money for real users. That's a kind of weight I haven't worked under and I want to.
Second, the pace and the feedback loop. At DataLinks I work directly with CEO and CTO, shipping something real every week. Revolut lets me keep that cadence at a scale where the decisions matter well beyond one customer. Fast feedback loops are how I do my best work.
Third, the people. I've heard directly from someone already inside that the work is intense but the team and the problems are worth it, and there's genuine domain knowledge to pick up. I take that signal more seriously than any recruiter pitch.
Underneath all three — I like being close to customers. Customer-facing work is where I see the impact of what I ship, and customer-facing is exactly where Point of Sale sits. That's part of what makes this role concrete for me, not abstract.
When the interviewer names a value, you have a story ready. These are the six values (with the three they said to emphasize in yellow).
"Dive deep until atoms. Start with WHY?"
"Ideas are great, execution is everything."
"Put yourself in the shoes of the customer."
"10x further from where we are now."
"Radically honest, direct, and respectful."
"End-to-end responsibility; never 'someone else's problem.'"
Start with "WHY?" — Articulate the problem; dive to root cause; question assumptions.
Never lose 'North' — Think beyond the task; scalable frameworks; avoid analysis paralysis.
Be open minded — Invite criticism; no politics; consider feedback regardless of title.
Commit and execute — Can-do attitude; persevere until finished; deliver on commitments.
Act like an owner — End-to-end ownership; never "someone else's problem"; self-direct.
Put product first — In the shoes of the customer; every detail; don't ship unless ready.
Keep it simple — Minimise friction; decide what to build and kill; bottom line up front.
Shoot for the moon — Disrupt, scale, reinvent; bold ambitious goals.
Push the envelope — Challenge from all angles; run toward critique; go above and beyond.
Jump in with both feet — Stretch assignments; share optimism; energy under adversity.
Ship, shipmates, self — Work together; support across departments; be inclusive.
Be radically honest, direct, and respectful — Clear feedback; speak up; best tone of voice.
Never compromise on talent — Thoughtful hiring; mentoring and coaching; act on underperformance.
Lead by doing — Roll up your sleeves; enable others; accept responsibility when things go wrong.
Strategy: first question about product/team, not conditions. Follow up on answers — drilling beats moving to the next. Save a personal question for the closer.
Ask upfront if you're still waiting for the team-fit invite to be scheduled:
"Thanks for taking the time. Before we start — I haven't received an invite to schedule the team-fit interview yet, so I'm wondering when I can expect it."
External factor, not a culture mismatch. Don't defend, don't criticize AUTO1 — state circumstances.
Name it honestly and pivot to Stats Perform when the interviewer pushes on structured KPIs.
Reframe as ownership mode. Not "I picked up Scala," but "I step in when the value requires it."
Ground consistently. Don't let the DataLinks cross-stack story drift toward "I want to be a backend engineer."
Kong Kuma is the closest analogue. Frame Revolut as the formal stretch, not a weakness.
Acknowledge lightly if raised; don't invent expertise. The language itself is rarely the hard part.
Not a gap to reframe — a real thing to probe in the team-dynamics questions. Listen carefully; if you can't get clarity on how the team works together, that's information.
From the recruiter brief — this is what the hiring manager will probe on, especially at senior+.
Impactful solutions to difficult and complex issues. Interviewer wants to understand: problem space, importance, business impact (the bigger the scale the better), how you identified the root cause, solution, quantifiable outcome. Demonstrate ownership.
Your hook: DataLinks async ingestion — root cause (sync model is the disease), alternatives rejected with reasoning, cross-stack ownership, outcome ties to POC unlocks and NL-query layer.
Goals you've achieved in your career, and whether you're ready for goals at Revolut. Strong examples with quantifiable success.
Your hook: Perform Group three-sport portfolio — defined the quality bar upfront (zero data loss mid-match), decomposed architecture per sport (desktop WS for hockey/basketball, mobile PWA offline-first for Turkish soccer), and the apps ran in production for four years without me before Stats Perform brought me back in 2023 to update auth and data flow.
Juicy breakdown of current KPIs/targets. Understanding of your key objectives.
Your hook: Honest frame — small AI startup, no formal OKRs, weekly CEO/CTO alignment, ship-vs-kill cadence. Proxy table ready. Pivot to Stats Perform formal KPIs if the interviewer wants structured numbers.
Senior+ expected to organize / lead. How you give feedback; how you mentor; difficult situations with juniors and how you handled them.
Your hook: Leocode 100-comment PR. Took the complaint to manager seriously, did a direct call, owned the form-of-feedback problem, rebuilt the review process, kept the technical bar, kept the relationship.