aaztehcy 1 day ago [-]
Database performance issues are usually a mix of missing indexes, suboptimal query plans, and no proper monitoring to catch slow queries early. Add to that -- most teams don't have a dedicated DBA, so problems accumulate until something breaks in production.
At UPSystems we do PostgreSQL and MSSQL -- from query tuning and index optimization to full HA setups (Patroni, Always On AG) and migrations (MSSQL to PostgreSQL is a common one). If you want a quick assessment of your setup, DM me.
thitami 1 day ago [-]
The JSONB approach for time-series is pragmatic for this scale. The 90-day sleep query concern is real though — have you considered a partial index on the timestamp field within the JSONB, or is the aggregation layer from Terra making that unnecessary?
Also curious about the MCP server design: are you streaming responses back to Claude or returning complete payloads? For trend analysis over 90 days that could be a meaningful difference in perceived latency.
anton_salcher 1 day ago [-]
Good distinction, but the 90-day trend queries actually don't touch JSONB at all, because trends hit scalar columns (avg_hrv, duration_seconds, start_time) where a regular B-tree index is sufficient. The JSONB arrays are only used for sample-level queries like "show me my HR during last night's sleep" which are inherently single-session lookups, not range aggregations.
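A minimal sketch of what that hybrid layout might look like as DDL. The table and column names here are invented for illustration (only avg_hrv, duration_seconds, and start_time are mentioned above), not the actual Pace schema:

```python
# Hypothetical DDL for the scalar-columns-plus-JSONB layout described above.
SLEEP_TABLE_DDL = """
CREATE TABLE sleep_sessions (
    id               BIGSERIAL PRIMARY KEY,
    user_id          BIGINT NOT NULL,
    start_time       TIMESTAMPTZ NOT NULL,  -- scalar: hit by 90-day trend queries
    duration_seconds INTEGER,               -- scalar
    avg_hrv          NUMERIC,               -- scalar
    hr_samples       JSONB                  -- raw 30s time-series, single-session lookups only
);
"""

# A plain B-tree index covers the range scans the trend queries need;
# the JSONB column never has to be indexed for those.
TREND_INDEX_DDL = """
CREATE INDEX sleep_sessions_user_time_idx
    ON sleep_sessions (user_id, start_time);
"""
```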
On streaming: currently returning complete payloads. For this use case it hasn't been a problem, because the trend queries aggregate 90 rows of scalar data which is fast, and the response is compact text. Streaming would make sense if I were piping large sample arrays directly, but those get aggregated server-side before returning. Worth revisiting if I add something like full workout trace exports.
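The server-side aggregation step could be as simple as collapsing each sample array into summary stats before it leaves the server. A sketch with a hypothetical helper, not the actual implementation:

```python
from statistics import mean

def summarize_samples(hr_samples: list[int]) -> dict:
    """Collapse a raw per-session sample array into a compact summary,
    so the MCP response stays small even when the stored array is large."""
    if not hr_samples:
        return {"count": 0}
    return {
        "count": len(hr_samples),
        "min": min(hr_samples),
        "max": max(hr_samples),
        "avg": round(mean(hr_samples), 1),
    }

# A night's ~1,000 raw samples become a four-field payload.
summary = summarize_samples([62, 58, 55, 71, 64])
```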
silbercue 42 minutes ago [-]
> Streaming would make sense if I were piping large sample arrays directly, but those get aggregated server-side before returning
thitami's point on perceived latency is the thing I'd still watch. had the same "it's fine, it's fast enough" moment in a different MCP context (iOS simulator automation)... turned out the 300ms per-tool-call was what made the whole loop feel sluggish even though each individual query was "fast". got it down to ~20ms and Claude felt noticeably less hesitant between calls after that. can't fully explain why but the behavior shift was real.
probably not your problem at 90 rows of scalar data =) just a thing I wish I had clocked earlier.
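Worth noting that this is cheap to check before optimizing. A quick way to separate query time from fixed per-call overhead (the stand-in handler here is hypothetical):

```python
import time

def timed_call(tool_fn, *args):
    """Wrap a tool handler and report wall-clock latency per call in ms,
    so fixed per-call overhead shows up separately from query cost."""
    start = time.perf_counter()
    result = tool_fn(*args)
    elapsed_ms = (time.perf_counter() - start) * 1000
    return result, elapsed_ms

# Example with a stand-in handler in place of a real MCP tool:
result, ms = timed_call(lambda: sum(range(1000)))
```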
KaiLetov 1 day ago [-]
Nice, I've been messing around with MCP servers lately too. One thing I ran into, Garmin's Connect API has pretty tight rate limits, something like 25 requests per 15 minutes if I remember right. Did you hit that? Also wondering if you're storing raw data in Postgres or just aggregated stuff. Because with sleep tracking you get a datapoint every 30 seconds, that adds up fast.
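For a cap shaped like that, a sliding-window limiter is the usual client-side guard. A sketch; the 25-per-15-minutes figure is from memory above, so treat the numbers as an assumption:

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Allow at most `limit` calls per `window` seconds."""

    def __init__(self, limit: int = 25, window: float = 15 * 60):
        self.limit = limit
        self.window = window
        self.calls: deque = deque()  # timestamps of recent allowed calls

    def allow(self, now=None) -> bool:
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] >= self.window:
            self.calls.popleft()
        if len(self.calls) < self.limit:
            self.calls.append(now)
            return True
        return False

limiter = SlidingWindowLimiter()
```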
anton_salcher 1 day ago [-]
Thanks, yes, I also saw the Garmin MCP and tried the TrainingPeaks MCP. I have a Polar watch, so I needed to build something for myself. I use the Terra API for my data pipeline, so they normalize and aggregate my data (and Terra handles all the provider-specific rate limits). Storage is a mix: every workout, sleep, or daily record gets its own row, with extracted summary fields as typed columns (HR, HRV, duration) plus the raw time-series stored as JSONB arrays.
Your point about sleep tracking is real, but the JSONB arrays compress well in Postgres, and a night's worth of 30s data is 1-2K data points, so it's manageable. The bigger concern is query performance when you need 90 days of sleep data.
What MCP servers have you tried out, anything similar?
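The back-of-the-envelope behind the "1-2K data points per night" figure, assuming roughly 8 hours of sleep and a rough per-sample JSONB cost (both numbers are assumptions for illustration):

```python
# Rough sizing for sleep samples at a 30-second interval.
sleep_hours = 8                     # assumption: a typical night
interval_s = 30
samples_per_night = sleep_hours * 3600 // interval_s   # 960, i.e. ~1K

bytes_per_sample = 8                # assumption: rough JSONB cost per small int, pre-compression
nights_90d = 90
approx_raw_bytes = samples_per_night * bytes_per_sample * nights_90d  # ~675 KB for 90 days
```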
kraftaa 1 day ago [-]
Any chance you would add Apple Watch or Mi Band? And is it possible to combine the info from two devices in one account? I use them for different purposes.
Glad you have a free trial without email verification :)
anton_salcher 1 day ago [-]
Hi, Apple Watch data is only available via SDK, so we would need an app for the connection; we are currently working on one, but it will take some time. Mi Band is also not available yet, but we will be adding new wearables.
In general, yes, it is possible to combine two, or as many devices as you like. My cofounder uses Whoop (sleep), Garmin (cycling), and Withings (weight). That is a clear USP for us: we combine different sources and normalize them in one single platform.
Thanks for signing up though, do you have any feedback? And I am sorry that we don't support your wearables (yet).
kraftaa 1 day ago [-]
thank you! Not yet, as I couldn't connect my devices. I'll try to revive my old Polar and connect it. And I do love the design and how smooth it is.
anton_salcher 1 day ago [-]
thanks so much! Your Mi Band, does it connect with "Zepp"? Because I can add that to our providers. So is the app where you see your Mi Band data called Zepp? I just looked it up a bit deeper, and yes, this could easily be possible ;)
anton_salcher 1 day ago [-]
so yes, I just added Zepp to our providers. If this is the right app, you can now try the Pace MCP ;)
kraftaa 23 hours ago [-]
Thank you, they switched from Zepp to Mi Fitness. I'll try my Polar.
anton_salcher 1 day ago [-]
I am a former professional athlete and built this mainly for myself: I wanted to analyze my training in Claude. When I first tried it, it was amazing, so I wanted to build it for everyone. I created a small dashboard and an OAuth flow, and now everyone can try it.