devnull3's comments

Barring the interactivity, SPAs will end up talking to the server anyway. So even SPAs will feel sluggish in a high-latency environment.

I think optimization triggers fundamental instincts in humans:

1. Tracking, i.e. navigating through the jungle while hunting, or finding where the bottleneck is.

2. The thrill of the hunt. The joy of finding the hotspot or the bottleneck. It makes your day.

3. The actual kill, i.e. shooting an arrow/spear, or applying a clever fix.

4. Bragging about the hunt, i.e. writing a blog post or giving a tech talk on how difficult and awesome the hunt was.

Also, optimizing a critical path has multiple takers in the org:

1. Product managers are happy due to the improved user experience.

2. Management is happy because in some cases it actually saves a lot of money.

3. Your boss is happy because he/she gets to score "team achievement" points when evaluations are around the corner.

4. Engineers get to do nerdy things which otherwise would not find traction in the org.

All in all, it's a win-win-win-* situation.


Ah yes, completely unfounded behavioral "science" to explain a modern thing. My favorite.

Shouldn't the service daemon upgrade schema and perform migrations on initialisation?

Before performing the upgrade, make a copy of the DB file. Also, multiple DBs can be initialised in parallel.

I fail to see why this might be so hard.
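A minimal sketch of the flow described above, assuming SQLite and a hypothetical `MIGRATIONS` table keyed by schema version (the names here are illustrative, not from any real service):

```python
import shutil
import sqlite3

# Hypothetical migration table: each entry upgrades one schema version.
MIGRATIONS = {
    1: ["CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)"],
    2: ["ALTER TABLE users ADD COLUMN email TEXT"],
}

def migrate(db_path: str, target_version: int) -> None:
    """Snapshot the DB file, then apply any pending migrations on startup."""
    backup = db_path + ".bak"
    # Copy the file first so a failed upgrade can be rolled back.
    shutil.copy2(db_path, backup)

    conn = sqlite3.connect(db_path)
    try:
        current = conn.execute("PRAGMA user_version").fetchone()[0]
        for version in range(current + 1, target_version + 1):
            for stmt in MIGRATIONS[version]:
                conn.execute(stmt)
            conn.execute(f"PRAGMA user_version = {version}")
        conn.commit()
    except Exception:
        conn.close()
        # Restore the pre-upgrade snapshot on any failure.
        shutil.copy2(backup, db_path)
        raise
    conn.close()
```

Since each database file is independent, running `migrate` for many tenant DBs in a thread or process pool gives the parallel initialisation mentioned above.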


> hypervigilant

If a tech works 80% of the time, then I know I need to be vigilant and I will review the output. The entire team structure is aware of this. There will be processes to offset the remaining 20%.

The problem is that when AI becomes > 95% accurate (if at all), humans will become complacent and the checks and balances will be ineffective.


80% is good enough for roughly the bottom quarter to third of software projects. That is way better than an offshore parasite company throwing stuff at the wall because they don't care about consistency or quality at all. These projects will bore your average HNer to death rather quickly (if not technically, then politically).

Maybe people here are used to good code bases, so it doesn't make sense to them that 80% is good enough there, but I've seen some bad code bases (that still made money) that would be much easier to work on if they didn't reinvent the wheel and didn't follow patterns that are decades old and that no one uses any more.


I think defining the places where vibe-coded software is safe to use is going to be important.

My list so far is:

  * Runs locally on local data and does not connect to the internet in any way (to avoid most security issues)
  * Generated by users for their own personal use (so it isn't some outside force inflicting bad, broken software on them)
  * Produces output in standard, human-readable formats that can be spot-checked by users (to avoid the cases where the AI fakes the entire program & just produces random answers)

We are already there. The threshold is much closer to 80% for average people. For them, LLMs have rapidly gone from "this is wrong and silly" to "this seems right most of the time, so I just trust it when I search for info" in a few years.

It is frankly scary seeing novices adopt AI for stuff you're good at, hearing about the garbage it comes up with, and then realising this problem is everywhere.

Gell-Mann amnesia. After seeing the subtle ways LLMs can be off the mark on things I know about, I am very wary of using them for any subject I haven't mastered. I don't want to learn some plausible nonsense.

Except that we see people in this very thread claiming they shouldn't review code anymore, just the prompts. So however good it is now is enough to be dangerous to users.

I think an in-between approach could be:

1. Identify top-N tenants

2. Separate the DB for these tenants

The top-N could be based on mix of IOPS, importance (revenue wise), etc.

The data model should be designed in such a way that the rows pertaining to each tenant can be extracted.
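A small sketch of step 1, assuming hypothetical per-tenant stats with `tenant_id`, `iops`, and `revenue` fields; the weights and field names are illustrative, not a prescribed formula:

```python
def top_tenants(stats, n=10, iops_weight=0.5, revenue_weight=0.5):
    """Rank tenants by a weighted blend of IOPS and revenue; return the top N.

    `stats` is a list of dicts with hypothetical keys:
    tenant_id, iops, revenue.
    """
    # Normalise each metric to [0, 1] so the weights are comparable.
    max_iops = max(t["iops"] for t in stats) or 1
    max_rev = max(t["revenue"] for t in stats) or 1

    def score(t):
        return (iops_weight * t["iops"] / max_iops
                + revenue_weight * t["revenue"] / max_rev)

    ranked = sorted(stats, key=score, reverse=True)
    return [t["tenant_id"] for t in ranked[:n]]
```

The tenants this returns are the candidates for step 2: migrating their rows (selectable by `tenant_id`, per the data-model point above) into their own database.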


This is how we did it at Mailchimp. I think this is ignored or overlooked because it means devs might have to care a bit more about operations, or the company has to care more.

That is what most tenanted scale-ups do. "Jumbo" tenants get relocated either to separate partitions or to partitions that are more sparsely populated.

To be fair, if we know something will happen with high certainty, then it's not much of a prediction.

The fact that no one really knows how LLMs will pan out means every projection about the future is a prediction.


I echo the parent comment. It's not the fact that predictions are made, it's the fact that no certainty interval is ever given, and everything is pronounced with complete confidence.


It will still be much less than the perpetual Cambrian explosion of JS frameworks.

In fact, a lot of the patterns in the likes of HTMX will be standardised.


One thing about Datastar vs htmx: Datastar embraces htmx's OOB swap and makes it a first-class concept.

HTMX's OOB swap appears to be an afterthought.


I don't know V, but most of what I have read about V suggests it is vaporware [1]. Is there truth to this impression?

[1] https://www.google.com/search?client=firefox-b-d&channel=ent...


I have seen this thrown around a lot, but I do not think it is true anymore.

V had a rough launch from what I can tell, with the author overselling it and underestimating the amount of work needed to fulfil his vision.

V still has a ways to go, but it is under constant heavy development with lots of contributors. It also has a wide gamut of interesting hobby projects using it.

I'd recommend taking a look at the examples here: https://github.com/vlang/v/tree/master/examples

I think the language has a lot of potential.


Go look at the commit log and tell me if it looks like abandonware.


They said vaporware, which isn't necessarily the same thing as abandonware. It could still be vaporware if it never delivers what it promised. I'm not sure of the current state of those promises, though, so I don't know if the vaporware label still holds. At one point it looked like it did, but I haven't kept up with it.


This is a good POV. For a while there, they did think they had a chance at finally figuring out how to solve the halting problem. They of course haven't, and had no chance to, but they wanted to discover that for sure, for themselves. So I admire them for trying. These days autofree handles about 95% of allocations, and what's left dangling is pruned by their very low-cost, tight cleanup-loop GC. If you really need to, you can do manual memory management. From my fairly substantial usage of the language so far, I've been a satisfied customer. I've found compiler bugs, strange behaviour, edge cases, etc. Yes, the team are blunt and to the point, but they're the most professional language team I've ever interacted with, often responding within hours; one time a compiler fix was released the next day.


Everything that's listed on the home page is delivered.


