Hacker News
Ask HN: What technology is “cutting edge” in 2022?
104 points by MathCodeLove on Jan 24, 2022 | 121 comments
I feel like I'm still hearing about the same ML and AI advancements that were being touted in 2016. What's some technology that is actually cutting edge and unknown to most laymen?



For programming languages, dependent types.

DT is a hot topic in the PL community at the moment. It massively enhances the capability of a type system by turning it into a comprehensive logic system, so you can encode whatever properties you'd like to enforce into a type signature. Theorem provers have been taking advantage of the Curry-Howard correspondence for some time, but the implications of DT for real-world programming are still not well understood (we need more real-world projects written in DT languages). There are also ambitious projects that want to bring DT into the mainstream.

If you are interested, you can take a look at Lean [1], Idris [2], and a few others [3,4]. Often these languages have esoteric syntax, but there are projects using a more conventional syntax, too, e.g. Cicada [5]. "The Little Typer" [6] is a pretty good introduction to this topic.

[1] https://leanprover.github.io [2] https://www.idris-lang.org [3] https://github.com/agda/agda [4] https://coq.inria.fr [5] https://cicada-lang.org [6] https://mitpress.mit.edu/books/little-typer
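
To make the idea concrete, here is a small sketch in Lean 4-style syntax (the names are illustrative, not from any library): a vector type that carries its length in the type, so taking the head of an empty vector is rejected by the type checker instead of failing at runtime.

```lean
-- A vector whose length is part of its type: the index `Nat` is a *value*
-- appearing inside a *type*, which is the essence of dependent typing.
inductive Vec (α : Type) : Nat → Type where
  | nil  : Vec α 0
  | cons : α → Vec α n → Vec α (n + 1)

-- `head` only accepts vectors of length n + 1, so calling it on an empty
-- vector is a type error, not a runtime exception; no match case for `nil`
-- is needed, because the type rules that case out.
def Vec.head : Vec α (n + 1) → α
  | .cons x _ => x
```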


ATS Language¹ is another language I would add to this list. First released in 2013, it stays close to the performance and minimalism of C, which can make it a good candidate for systems programming².

¹ http://www.ats-lang.org

² https://www.youtube.com/watch?v=zt0OQb1DBko "A (Not So Gentle) Introduction To Systems Programming In ATS" (2017)


That ATS video is really good. I think ATS is really fascinating, as it promises full control over the memory layout while also offering strong safety guarantees.


Regarding Lean, I would recommend checking out Lean 4[1] instead. They rewrote it in Lean itself and it now has a much cleaner design[2][3][4].

[1] http://github.com/leanprover/lean4

[2] https://leanprover.github.io/papers/lean4.pdf

[3] https://leanprover-community.github.io/lt2021/slides/leo-LT2...

[4] https://leanprover-community.github.io/lt2021/slides/sebasti...


Dumb question: is this expressed in TypeScript by the way it offers the ability to write logic inside types using keywords like `infer`, `extends`, `in`, etc.?

  type Foo<T extends Record<any, any>> = {
    [K in keyof T]: T[K]
  }


Not a dumb question. The answer is no: TS's features are in general not sufficient to express dependent types. It would need to stop making a distinction between types and values, and allow running arbitrary (total) functions in order to define a type, at the least. You can experiment with a toy implementation here: <https://thelittletyper.com/#pie>
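
For contrast, a small sketch of roughly where TS's type-level logic stops (the names here are just for illustration):

```typescript
// TypeScript can compute *within* the type level over values known at
// compile time, e.g. reading a tuple's length as a literal type:
type Length<T extends readonly unknown[]> = T["length"];

const xs = [1, 2, 3] as const;
const n: Length<typeof xs> = 3; // only the literal 3 typechecks here

// But types are erased and cannot depend on *runtime* values: there is
// no way to give `replicate(k, v)` the return type "array of length k"
// for an arbitrary runtime k, which is exactly what dependent types allow.
function replicate<T>(k: number, v: T): T[] {
  return Array(k).fill(v); // the best TS can say is T[]; the length is lost
}
```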


Spatial Finance aka Geospatial ESG: Measuring the environmental impact of companies from space to assess how green a company is. Sustainable investors need this information to overcome the problem of greenwashing. Here is some more information on this topic:

[1] https://www.wwf.org.uk/sites/default/files/2022-01/Geospatia...

[2] https://www.cgfi.ac.uk/spatial-finance-initiative/

[3] https://www.oxfordeo.com/post/near-real-time-water-stress


Wow! This is something we are pivoting into after almost a year of working with geospatial tooling for agriculture. We see this come up a lot - tracking and analyzing satellite data for sustainability and to curb climate change.

Well, almost everyone is looking at it from a branding angle, but if good things come out of it, so be it.

We kinda invented a variation of super resolution for satellite imagery. We are still at that early stage of ML but I’m amazed by what my co-founder does in AI/ML.

Still early, lots of interesting things and finding it really hard to pick just the tiny few to focus on.


What other use cases do you see for remote sensing data, aside from agriculture and climate change tracking?

Asking because our research team (social sciences, but interdisciplinary) is developing middleware and tooling for working with publicly accessible remote sensing data in Julia.


There is quite a lot of demand from fraud and change monitoring.

For instance:

1. Government-private billion-dollar construction mega-projects need regular tracking of discrepancies in the area under construction, excess or shortfall in areas covered, etc.

2. Planning of train tracks, especially in developing countries, and the eventual monitoring of construction with proof of work done, etc. Drones cannot fly all the time (weather) and are costly for large areas of coverage.

Commercially available data, not just optical but also SAR[a], is the other thing. SAR may in fact be more appropriate to track and count trees, etc. (if the cost becomes more economical). East Asian regions use mostly SAR to look at palm trees and the like.

a. https://en.wikipedia.org/wiki/Synthetic-aperture_radar


Do you work in the field? And is the UK a good place for startups in it? Thanks


No, but I would like to start a startup / open source project in this area. I maintain the website OpenSustain.tech, and through it I have come into contact with various projects being created in this area. I think the UK is a good location for a startup in this field. There are a lot of investors here and a lot of funding for sustainable projects.


Very cool. Do you mind if I send you an email at team@protontypes.eu? We have an ongoing (academic) research effort that may fit the criteria in your contribution guide : )


Please contact me here: tobias.augspurger@protontypes.eu


Interesting! Two years ago I was reading that there is exponential growth in remote sensing data (drones and satellites, due to CubeSats and the decreasing cost of launching material into space). Do you see growth in that area, or where would you get more information about it?


> Sustainable investors

Why is this a thing? Sounds like something that limits the investor in their investment choices.


Ever heard of an investment thesis or focus? You don't have to be an altruist to realize that massive investments and huge markets are being made in sustainability. How many pledged net zero companies are there? How many governments have committed billions?

Carbon markets are booming, for one, and there aren't enough effective projects to accept the money.


Some people think long term profits can only happen in a non-apocalyptic world, so they try to let humanity stay alive, I guess.


This seems to be built on the (IMHO wrong) premise that not investing will somehow change the minds of these non-sustainable businesses, whereas in reality, by investing in a business, you get voting rights to actually influence the direction the business is heading.

Thinking about it the other way: by not investing in such non-sustainable companies, only the people who don't care will. And they will actually get the companies at a discount.


> by investing into a business, you get voting rights to actually influence in which direction a business is heading.

Or you just get entangled in the political battles already happening in that company, which will take more effort and time to untangle before you can actually steer the ship towards some kind of sustainable business.

So not only will you be fighting the lack of a sustainability mindset/processes, but also all the entrenched political powers that made this mindset a reality. Throwing good money after bad stuff is not really a good prospect for investing; better to invest in places that aren't dirty from the get-go.


Or you can just hand over the voting rights of your shares to some NGO.


Sure.

But the companies who actually do good stuff need money, and I'd rather give it to them than the others. I also think that investing in these companies will sooner change you than you will change them.


When you are buying shares on the stock market, you are not giving money to a company but to the previous owner of the shares.


Because the market (= retail investors and "green" companies) demands it.


- Retro game GAN upscaling, although it appeared in 2019 (ESRGAN). I think this will explode this year. Enhanced Super Resolution Generative adversarial neural networks https://www.theverge.com/2019/4/18/18311287/ai-upscaling-alg...

- Demakes of computer games into 16-bit titles. Weird and useless, but there is something interesting going on here, https://youtu.be/qtNytQXVnx8


Man, I hope upscaling falls out of fashion soon, or at least eventually.

The amount of real data we're going to overwrite accidentally or even just mentally forget about is insane. I prefer looking at real life rather than a computer's guesses of what may have been within the cracks.

Something being naturally ugly is better than being sometimes beautifully fake and sometimes uglier and fake.

The FF7 video in that post is absolutely terrible and shows how much worse it can get when upscaling. It just completely fails on the creative art decision to dither lighting, and thinks it was a texture on the original object.


You almost talked me out of 3D Pacman and Galaga.

Nah.



DLSS has been a huge step forward for anti-aliasing and gaming in high resolution (and I'd argue it's still cutting-edge, because AMD hasn't managed to copy it to the same quality level).


Similarly, I'm a big fan of AI video upscaling:

https://www.extremetech.com/extreme/308434-star-trek-voyager...


GANs are impacting all stages of the game pipeline ;)

https://developer.nvidia.com/blog/gancraft-turning-gamers-in...


Zero-knowledge proofs. Several major cryptographic achievements have been unlocked only in recent years, facilitating both privacy and efficiency for certain setups that were only theorized before. There’s still a lot to be done for developer tooling and libraries. The foundation is solid but the ecosystem is nascent.

So far real-world applications have been mostly in the blockchain/cryptocurrency space (privacy/anonymity for Zcash and Tornado/AZTEC, Ethereum L2s, Bulletproofs in Monero, etc) but there’s so much untapped potential for other domains still.
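
As a taste of the underlying idea, here is a toy sketch of a Schnorr-style identification protocol; the parameters are tiny and the "random" nonce and challenge are hard-coded, so this is purely illustrative and has none of the security of a real ZKP system:

```typescript
// Square-and-multiply modular exponentiation over BigInt.
function modPow(base: bigint, exp: bigint, mod: bigint): bigint {
  let result = 1n;
  let b = base % mod;
  let e = exp;
  while (e > 0n) {
    if (e & 1n) result = (result * b) % mod;
    b = (b * b) % mod;
    e >>= 1n;
  }
  return result;
}

const p = 467n;            // tiny prime modulus (insecure, demo only!)
const g = 2n;              // generator
const x = 153n;            // prover's secret
const y = modPow(g, x, p); // public key y = g^x mod p

// One protocol round:
const k = 99n;             // prover's nonce (must be random in reality)
const t = modPow(g, k, p); // 1. prover sends commitment t = g^k
const c = 5n;              // 2. verifier sends challenge c
const s = k + c * x;       // 3. prover replies s = k + c*x

// Verifier checks g^s == t * y^c (mod p) without ever learning x,
// since g^(k + c*x) = g^k * (g^x)^c.
const ok = modPow(g, s, p) === (t * modPow(y, c, p)) % p;
console.log(ok); // true
```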


ZKPs are used in the covid vaccination passport systems of both the Netherlands and New Zealand.

Ours (I'm part of the NL team) is open source: https://github.com/minvws


Where can I read more about ZKP in the context of your project? I went through the repo but could not find anything related to ZKP in there.


The documentation isn't that great, but the implementation is here:

https://github.com/minvws/nl-covid19-coronacheck-idemix

We use IDEMIX to create a unique proof per display of the QR code. So you cannot track a person as they use their QR code (if you have control of a network of scanner apps).


That sounds, in theory, like a pretty cool use for ZKP. However, I'm skeptical how you fulfil the other requirements that vaccine QR codes generally have without also providing a workaround for tracking.

For instance, most vaccine QR codes are not considered valid by themselves, as I could simply share my QR code with my friend. So I need to present valid photo ID alongside the QR code.


Computerphile did a great YouTube video on ZKP: https://youtu.be/HUs1bH85X9I


I'm the author of one of the open source NZ vaccination passport libs. I'm pretty familiar with the specification, and as far as I know there's no use of ZKPs. It's fairly standard public/private key cryptography. Happy to be proven wrong though!

Here's the spec: https://nzcp.covid19.health.nz/ and our implementation https://github.com/vaxxnz/nzcp-js


I'm indeed wrong, I had understood from a discussion with someone involved in the project that it was so :) Just had a chat with my buddies at mattr who corrected me :)


I’m in NZ. Can you link the source for NZ using zero knowledge proofs? Very curious to read it thanks


Any chance you are looking for more members - at least on the web aspect?


[flagged]


It seems likely you are just trying to get a rise out of people, but I'll bite:

Which tech do you mean, and what do you think makes it terrible?


I think that he's confusing "crypto" - as in the Ponzi scheme - with cryptographic techniques. The latter is very interesting and has real-world applications. The former is a scam which is making a small group very rich and will eventually wipe out a load of suckers.


[flagged]


Okay, why do you think those are terrible though? Aren't they a useful tool, provided they are implemented in a privacy-preserving manner (which I suspect the ZK-proof implementation is all about)?


Anybody have a good resource to read up on the basics of how this works and what use cases there are?



I thought this video was pretty good at explaining the basics, actually.

https://youtu.be/fOGdb1CTu5c


Wireguard.com


vitalik.ca


Here's a list I maintain and keep an eye on:

- WebAssembly / WASI on the server (for serverless tech, cost efficient containers, plugin environment for systems, etc) https://twitter.com/vettijoe/status/1484507483788161026?s=21

- CRDT and its implications for local-first software (for me this is a better bet as a technology to architect your solution, much better than what blockchain provides)

- Provably correct programs (all the latest languages like Rust, Kotlin, and Swift ship with a flavour of this idea; for example, it's possible to write null-error-free programs in all of them)
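
To give a flavor of what a CRDT is, here is a minimal grow-only counter (G-Counter) sketch; the names are illustrative, not from any particular CRDT library:

```typescript
// G-Counter: each replica only ever increments its own slot, and merge
// takes the per-replica maximum. Merging is commutative, associative,
// and idempotent, so replicas converge regardless of message order -
// the property local-first software relies on.
type GCounter = Record<string, number>;

function increment(c: GCounter, replica: string): GCounter {
  return { ...c, [replica]: (c[replica] ?? 0) + 1 };
}

function merge(a: GCounter, b: GCounter): GCounter {
  const out: GCounter = { ...a };
  for (const [r, n] of Object.entries(b)) {
    out[r] = Math.max(out[r] ?? 0, n);
  }
  return out;
}

function value(c: GCounter): number {
  return Object.values(c).reduce((sum, n) => sum + n, 0);
}

// Two replicas increment independently while offline...
const a = increment(increment({}, "A"), "A"); // replica A counts 2
const b = increment({}, "B");                 // replica B counts 1

// ...and converge to the same total no matter the merge order:
console.log(value(merge(a, b)), value(merge(b, a))); // 3 3
```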


Why would WASM be more cost efficient than native code (which is ~20% more performant than WASM)?


I think that GP is referring to WASM being more cost efficient than Kubernetes.


AR (augmented reality).

Considering how good AR is _right now_, we have very few practical applications for it. It has been quite good for at least a couple of decades (I studied AR in the 90s and we had some pretty amazing demos even back then).

Snapchat is doing some amazing, if silly, AR stuff, and hardware capabilities around graphics are improving all the time (especially with the use of graphics cards for crypto mining); in fact graphics are getting to the stage where it's genuinely difficult to see at a glance what is real and what isn't.

Basically this is what Facebook is going for with the Metaverse, but history tells us that it will probably not be an incumbent that develops the best products.

There will be some cool/scary stuff coming.


> There will be some cool/scary stuff coming.

Yes.

For AR to work properly, it requires precise location. Apple, Lyft, FB et al. are all using visual mapping, where pictures are processed, rebuilt in 3D, and key points extracted.

Those key points can be compared to key points found in a sample image; using some trigonometry you can position that image in 3D (lat, lon, height, and angle: left-right relative to north, plus up-down and rotation). This mapping is accurate to about 20 cm now (in real time).

This means that (future) AR headsets know exactly where you are at all times. Combined with the tech to find where you left your keys (object recognition on steroids), you can find out who owns what pretty quickly, through the use of a dodgy app.

Then there is the stalking. Facial recognition might be built in, so you can remember people's names better (I suspect there will be at least a cursory attempt at permissions), but if a third-party app has access to both position and the raw camera, you know the position of 80% of people within range of the glasses.

AR glasses mean that open source will have to confront ethics properly for the first time. You can have an open platform, but it will be the cause of oppression (because some prick will make a woman-stalking app; it's almost a certainty now).


> will probably not be an incumbent that develops the best products.

Unfortunately I think an incumbent will buy the company that develops the best products


Yeah, I mean the story is already 'Facebook is the leader in augmented reality', not 'Facebook bought Oculus and several leaders in augmented reality'. It's so odd to me how companies and CEOs get credit/reverence from people for things they had no hand in.


I am personally waiting for the first viable consumer device by a major player like Apple/Samsung, with hand gesture support (without additional peripherals like gloves). To me, that is the missing component for the entire sector to do a double-take, along with, of course, at least a few hours of battery life.


Have you seen the gesture-tracking in the Oculus Quest 2? It's incredibly impressive, and it's in a $300 consumer device [^1] that can almost do AR via passthrough video, and easily has hours of battery life. But it does run into a fundamental problem of gesture tracking without gloves, which is that the camera needs to be able to see all of your fingers (unless you get to the point of inferring finger position by looking at the hand muscles, which I don't think we're quite at yet). Often you need the fingers to be away from yourself for operations like twiddling buttons or pointing at things, so if the camera is attached to you you've got an issue.

[^1] Granted, economics of video game consoles aren't quite the same as things that might be industrial tools, mostly because they can be sold at a loss with the expectation of making it up in software licenses.


I recently tried telepresence software from a start-up that allowed me to control a factory robot arm almost a thousand miles away through an Oculus Quest.

It definitely had that 'wow' moment for me although the product was not production grade quality yet.

It did open my mind to possibilities I had not considered before around the future of work. Mixed feelings: I mean, think of a person in <low cost country> operating a robot arm making a food dish in <higher cost country>. In some ways it can be considered a bit depressing, but in others it opens up opportunities that have never existed before.

It's a sort of halfway stop between no automation and full automation for certain tasks where real intelligence is needed or cheaper.


Was labour cost arbitrage the best use case they came up with for this amazing technology?


Was the latency noticeably annoying? Or was the robot arm not entirely 'real-time' anyway, so it was not an issue?


No, it wasn't; strong internet connectivity on both ends was a must, as these were real-time operations.


The following projects seem promising:

* OpenCompute[1]: A Facebook-led initiative to build and design open source hardware, mainly for data centers but also for enterprise.

* RISC-V[2]: An open source ISA and alternative to x86 and ARM.

* Cloud Native Computing Foundation[3]: An industry collaboration for building cloud infrastructure.

* Teleoperation[4]: Remotely operating machines, with humans taking over in situations where the AI is not sure of the correct action to take. For example, before we get self-driving vehicles, we may get vehicles that are operated remotely. The vehicle will have cameras that stream data to operators, who could be sitting in an office and who in turn send control signals wirelessly to the vehicle. This model could also be applied to drones, humanoid robots, etc. More jobs could be sent to low-wage economies, and workers there could perform work that previously required physical presence in high-wage economies.

[1] https://www.opencompute.org/

[2] https://riscv.org/

[3] https://www.cncf.io/

[4] https://en.wikipedia.org/wiki/Teleoperation


I was pleasantly surprised by some of the examples in Google Research themes list from this past year: http://ai.googleblog.com/2022/01/google-research-themes-from...

E-Graphs from Julia are another cool space being explored


Thanks for bringing up E-Graphs! I hadn't heard of them before, and the field looks quite intriguing. At first I thought it mostly looked like a rehash of what database query engines do internally, but reading a bit into it, there might be some more merit to it, though the literature around it seems sparse.


WebGPU, particularly as a portable way to do compute


100%, was just about to comment this. Combined with WebAssembly, we're going to see a brand new distribution channel emerge for distributing compute-intensive apps (especially real-time 3D ones!) in Chrome, Edge, Safari, and Firefox.

Look out Steam, because traditional desktop apps like games and even VR applications will start migrating to the web, because that's where users are. My startup Wonder Interactive is working on exactly this.

https://theimmersiveweb.com/

The metaverse will be born from the open web and emerge as a 3D, immersive internet.


The thing with games is that networking costs are currently too high to stream in a practical sense. I've tried Google Stadia and Xbox Cloud Gaming, and they are both OK for single player if you can tolerate less-than-stellar graphics and lag spikes, but completely unusable for any multiplayer. Rubberbanding galore, and I am on gigabit fiber. This technology requires a telecom infrastructure overhaul to be successful, and you can imagine how much inertia is in place in that industry.


That's not at all my experience, or that of my friends. We play multiplayer games together via streaming (Geforce Now) even though we're in different continents, and it works perfectly fine.


My only gripe is the “open web” part, which is really far from open. We have at most 3 implementations, with a very unhealthy balance between them.

Especially given that the web is a “standard” that is basically impossible to implement from scratch.


Probably off-topic, but the notion of a second brain (not the Elon Musk stuff) is, to me, emerging as a problem to solve. For years there have been discussions about GTD, bullet journals, mind maps, etc., to help us deal with the information deluge, whereas the (primitive) tools available were notebooks, calendars, and post-its. The current battle among PKM (Personal Knowledge Management) solution providers suggests, to me, that we are slowly starting to investigate the proper UX for a second brain (i.e. a place to store transient and persistent data that you encounter in real life and that you want to manage OUTSIDE of your brain).


Wendell from Level1Techs has been talking about some aspects of this recently: https://youtu.be/gVKpMb7IZo0


The theory that big players DO NOT WANT you to deal with your data interlinking in any way that is not their way is... disturbing.


What about this is high tech? These things are nice, but all the systems I know of are basically just text files with links between them, no? I'm thinking of Roam, org-roam, etc.


To me, sitting in front of a computer for a couple of hours organizing your data deluge is NOT an option. The basic question of any new UX is now: can I use it while driving, can I use it without a screen, can I use it while cooking, can I talk to it, can it talk to me, can it filter things out for me depending on the context?

But UX aside, PKM is all about interlinking "stuff" in your own way.

To me it is as important for data consumers (i.e. us) as the Web has been at interlinking things for data publishers. And it has severe implications in terms of data publication, data identification, tool interaction, and general usability.

To take a basic example, I still have to do manual actions when I see a concert poster in real life. I should be able to just take a picture of the poster, or scan a QR code, and get everything that is relevant (Spotify links, MusicBrainz links, an appointment ready to be validated in my calendar, payment ready, etc.).


Text to image generation, like GLIDE or DALL-E.


Seconded - the past year or so has been a bit of a step-change in terms of what is possible in this space. See also: GauGAN2, the many CLIP-guided [diffusion/VQGAN/other] image synthesis pipelines in the AI art community.


We can now make 20 Tesla magnets.

https://hackaday.com/2021/09/27/commonwealth-fusions-20-tesl...

Nuclear fusion cometh, and it will change absolutely everything.


A photonic processor is coming out this year, claiming to be up to 10x faster than an Nvidia A100 on BERT while using 90% less energy - https://lightmatter.co/products/envise/


Oscillatory computing.

Csaba, G., & Porod, W. (2020). Coupled oscillators for computing: A review and perspective. Applied Physics Reviews, 7(1), 011302.


VR - Check out Oculus Quest 2 for a cheap wireless standalone consumer device. If it had better resolution I think it could hit the mainstream. Or maybe it's already happening.


The lack of market demand for VR isn't down to a lack of resolution. It's because people don't want to put a box on their head. There's too much friction in that experience. Most people just want to sit in front of a television to be entertained, where they can get up and move around, or divert their attention to their phone or something else. It's for this reason we couldn't even get people to put on 3D glasses.


Any kid around 13+ at the moment is mad for VR; they all love it.

When FB/Meta figures out a way to make it MUST HAVE instead of WANT TO HAVE, they will dominate, and the Oculus tech will along with it.


Almost comical to draw an analogy to 3D glasses imo - VR is a completely different ball game. VR headset purchases are growing at more than 2x year on year, and the "try it once and forget about it" thing seems to be less and less of a thing based on post-Christmas 2020 & 2021 WebXR usage stats that I've seen.

The virtuous cycle is starting now that big AAA devs are taking notice of the platform. More games => more users => more devs => more games.


It's already happening, the issue is the lack of content.

I've been buying headsets compulsively for the last few years only to replay Half Life: Alyx or Boneworks for the 400th time.

I just got Hitman 3 and it's sad. Borderlands was watered down and not a great experience. There are nearly no multiplayer games. Minecraft is pretty fun but Microsoft Minecraft doesn't have mods.

I am waiting with bated breath for Unity and Unreal to treat VR like a first-class citizen so it's easy for people to make games.


It’s at a stage with undeniable utility and a possibility of becoming ubiquitous, but the face-hardware problem will need to improve. Ain’t nobody wearing that for a virtual meeting, even though right now the visceral advantages of meeting someone in Oculus are so much better than Zoom.


It definitely is already happening. Especially with the PS5/Xbox shortages, lots of kids got the Quest 2 as their Christmas present this past year. A report back in November has it marked at 10 million units sold, which is pretty significant considering the history of VR adoption.


For web dev, I would say stuff like Liveview / Hotwire.

It makes you faster at developing complex web applications that would otherwise take much longer with a traditional SPA framework. You don't have to worry about handling server communications, how to send data and in what format, etc.; all updates will be handled in a consistent manner, and it's scary easy to make live updates to all connected clients.

Of course, these things are getting more popular now and a lot of people on HN may already know about it but I would assume most laymen still have no clue that this is a thing.


Cell reprogramming


This. It will fall like a hammer, and nobody is really seeing it coming. It may seem like I'm trolling, but I think this has the potential to result in global riots in a few decades.


Non-industrial tech. These huge massive excesses of computation aren't really relevant, compared to the frontiers of computing being a personally possible revolutionary potentiality. Progress is disparate & not well proven. But big computing is both cutting edge but increasingly irrelevant & boutique, not broadly impactful. We will leave our impactlessness shortly, start to explore & find edges that bring us somewhere good, expand our lands.


It’s funny to think that my pet, without having the ability to vocalize a human language or think abstractly, still manages to communicate more effectively than this pile of words arranged into a simulation of ‘deep thought.’


You're saying something without saying much. Willing to expand on this, or provide a link?


Ted Nelson's Computer Lib/Dream Machines is still on point. It's not much more specific than I am, still mostly vague. But I don't think we're at a stage where focusing on specifics would do anything but mislead; it would winnow down an idea, would have us thinking about a tree when the plot is about the forest.


It seems that many of the answers point to a general CS area or a trending topic in it, but I'm pretty sure that progress happens in all the areas (see arXiv's CS page [1], for instance), as well as in topics outside the hyped ones, and each area or topic has its "cutting edge" bits (new research).

[1] https://arxiv.org/archive/cs


I'd say High-NA EUV


For anyone wondering what the hell this is:

EUV = extreme ultraviolet lithography

NA = numerical aperture

the NA part makes the EUV part better :)

These next-generation lithography systems will be key to advance Moore’s Law towards the logic 2nm technology generation and beyond.


If only I could buy Zeiss shares.


I think typing biometrics has made a lot of significant advances in the last year. I know it is something that has been in the making for a while, but there are some companies out there offering some very interesting solutions, such as continuous authentication.



In the NLP community, an example of cutting edge research currently happening is token-free transfer Transformers.


WebAssembly


For client-server APIs, I'd say it's using OpenAPI as the specification, as opposed to just for documentation and testing.

It feels like it shouldn't be "cutting edge", but it's still not used as much as it should be.
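
As a (hypothetical) illustration of the spec-first approach, a file like the one below becomes the source of truth, from which server stubs, client SDKs, and documentation can all be generated; the endpoint and schema here are made up for the example:

```yaml
openapi: "3.0.3"
info:
  title: Example API        # hypothetical service, for illustration only
  version: "1.0.0"
paths:
  /users/{id}:
    get:
      operationId: getUser  # codegen typically uses this as the method name
      parameters:
        - name: id
          in: path
          required: true
          schema: { type: integer }
      responses:
        "200":
          description: The requested user
          content:
            application/json:
              schema:
                type: object
                properties:
                  id: { type: integer }
                  name: { type: string }
```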


I really like the idea of OpenAPI: use your input and output data models to document your API, and maybe even stub out the framework for your actual code or a language-specific SDK.

But the reality is that deviations in auth, data types, etc., end up complicating your OpenAPI specs way more than expected. And the tooling that is supposed to stub out doc sites using your OpenAPI specs is all over the place in terms of what is and isn’t supported, and even how the same supported features are implemented.

My team ended up making a custom redoc implementation for our docs site (to include better narrative), but the specs are just for documentation, and don’t inform the application or SDKs at all. Reality is so close to the dream, but in my experience, still so far away.


OpenAPI must die. It only allows describing REST in a quite limiting manner. There's only one way to implement callbacks. There is no WebSocket support, nor any hint of async communication protocols.

OpenAPI is OK for prototyping, but production-grade, performance-tuned software will likely break out of what OpenAPI allows you to spec.


> There is no Websocket support nor any hint at async communication protocols.

Both of these things are out of scope for OpenAPI. It's a separate discussion whether they should be or not, but they are currently not. For async protocols, there is AsyncAPI [0], FWIW.

Do you know of an alternative spec format that would cover what you're missing with OpenAPI?


Anything that stands between the programmer and the code is doomed to be abused by those who can't.

The whole concept of specifying an API in a DSL should be limited to the domain and not be standardised.

OTOH, I am all for literate programming that combines the art of writing prose and code, as in a Jupyter Notebook. Among competitors to OpenAPI, I keep an eye on API Blueprint. While it leans more towards prose than code, it is a valuable tool for documenting APIs.


> Anything that stands between the programmer and the code is doomed to be abused by those who can't.

Those who can't what? If you mean the ability to code, I don't understand how that is related to API specifications.

> The whole concept of specifying an API in a DSL should be limited to the domain and not be standardised.

Why exactly?


On the networking side: eBPF


Neuralink is to implant chips in human brains in 2022. What could be more cutting-edge?


Dark blockchain


From their about page:

> "Whether it’s your drug sales, IRS fraud earnings, or stacks of gold bars you purchased with stolen credit cards; money matters to us all. Rather than relying on straight edge normie crypto - with their fees, delays, and anti-fraud risk - we stand for blockchain without limits. That’s why we support DarkBlockchain, a global digital cloud currency that only you can control on the dark web. Find out more..."

Maybe it is high tech, I don't know, but all I was thinking is wtf



no it was a joke

Dark Blockchain is a riff on hypes like “dark web” and “blockchain”


It might be a joke, but SNARKs / STARKs really do provide the foundation for a fully anonymous, fully decentralized blockchain. Calling it “dark” just looks like a branding choice.


DeFi - Decentralized financial instruments that are implemented via Smart Contracts on a Blockchain.


> Decentralized financial instruments that are implemented via Smart Contracts on a Blockchain.

I know it's _now_ fashionable to dunk on crypto, but I really wish the whole "smart contracts" thing would be seen for what it is: neither smart nor a contract.

Contracts only work because there is a higher power to appeal to when something goes wrong. (the law courts) That means that when someone does something wrong, the other side can attempt to get justice.

If a smart contract is wrong, you're fucked. Not only do you have to speak lawyer, you also have to understand obfuscated code, which means getting it right is expensive, as the combined skills are exceptionally rare.

I get the aspiration, but its at best misguided.

The common refrain is that actual contracts are not scalable and can't be done in real time. This isn't true. HFTs are basically creating millions of contracts a day. When you buy something from Amazon, that's a contract. When you use your credit card, that's a contract as well.

Law needs to be reformed, don't get me wrong, but the blockchain ain't the thing to do it.


I agree completely; all of these are just tools that are used to interact with each other. What we do with those tools is the important part. I do see these advanced cryptography methods having a place in society, but the way ~most~ people are using them right now is not exciting and very predictable: pyramid schemes.


This seems to get downvoted a lot, and I don't know why. Regardless of whether you think this is a good/useful idea, it's certainly leveraging technology (and scientific knowledge) that has only become available recently.


HFT and currency speculation hasn't been cutting edge since... I don't know, the 90s? 80s?


It's "cutting edge" in the sense that people are just now getting around to implementing most of the traditional centralized financial systems back again onto popular blockchain technologies.

It's worth calling out specifically because it will not likely last very long in its current state; large, institutional actors will either jump on it, or it will die an unceremonious death, gobbling up billions of dollars in its wake.



