The key difference is in the architecture. With the traditional approach, each app is a self-contained unit of functionality that slaps its own UI on top. You interact with one app to do one thing, then switch to another to do the next, and so on. Crucially, the apps share no context, and there's no way to compose functionality from different apps in a meaningful way.
With the WeChat approach, you have a single UI framework, and apps are effectively services that plug into it. Now it's possible to have a shared context that spans multiple apps and to pull their functionality into it. That facilitates workflows involving multiple apps, where each app is a component of the workflow. It's a similar idea to the Unix philosophy: you have a bunch of command-line utilities, and you can pipe data through them in a script, composing their individual functionality.
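To make the Unix analogy concrete, here's a tiny sketch in Python (all names invented for illustration): each "app" is just a function that transforms data, and a pipeline chains them the way `|` chains command-line tools.

```python
# A sketch of Unix-style composition: each "app" is a small
# function that transforms a list of lines, and pipe() chains
# them, mirroring something like `cat notes | grep note | sort`.
def grep(lines, needle):
    """Keep only the lines containing `needle`."""
    return [line for line in lines if needle in line]

def sort_lines(lines):
    """Sort lines alphabetically."""
    return sorted(lines)

def pipe(data, *stages):
    """Feed `data` through each stage in turn, like a shell pipeline."""
    for stage in stages:
        data = stage(data)
    return data

lines = ["beta note", "alpha note", "gamma log"]
result = pipe(lines, lambda ls: grep(ls, "note"), sort_lines)
print(result)  # ['alpha note', 'beta note']
```

The point is that `grep` and `sort_lines` know nothing about each other; the composition lives entirely in the pipeline, which is what a shared OS-level context would let you do across apps.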
This doesn't have to be done with a mega app like WeChat; you could bake it into the OS itself, and I think that would actually be a very good architecture. Coupling the UI to the business logic is the wrong way to go. It's much better to decouple the two and let the user create whatever workflow fits their particular use case, leveraging functionality provided by different apps.
That already exists.
Right, I mentioned in my comment that you can do this at the OS level. However, the way iOS does it is not general; it's something devs have to do on a case-by-case basis. What I'm talking about is decoupling the UI from the logic being the default. The OS would present a single unified UI to the user, and apps would just provide service functionality. An app could still ship a default view for itself, but the user could adapt it any way they wanted.
I'm not sure I fully understand. Having a pre-made UI would limit what functionality could be implemented. It sounds like the OS developer making 90% of an app and then letting third parties plug in their back ends, like a white-label kind of thing. Or do you mean something more like UIKit/SwiftUI?
No more than the GUI toolkit the OS already provides does. You'd build UIs the way you normally do, then specify the endpoints the widgets connect to for their data. The key is that every app would be forced to explicitly provide an API layer that the UI component talks to, and that anything you as the user want should be able to talk to that API.
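Roughly what I mean, sketched in Python (the `Service`/`endpoint` names are made up for the sake of the example, not a real framework): the app is reduced to a set of named endpoints, and the default widget and a user's custom workflow both talk to the same API.

```python
# Hypothetical sketch: an "app" is just an API layer of named
# endpoints. The default UI widget and any user-built workflow
# call the same endpoints; neither gets privileged access.
class Service:
    """An app reduced to its API layer: a registry of endpoints."""
    def __init__(self):
        self.endpoints = {}

    def endpoint(self, name):
        """Decorator that registers a function as a named endpoint."""
        def register(fn):
            self.endpoints[name] = fn
            return fn
        return register

    def call(self, name, **kwargs):
        """Invoke an endpoint by name, as any client would."""
        return self.endpoints[name](**kwargs)

todo = Service()

@todo.endpoint("list_tasks")
def list_tasks(done=False):
    # Toy in-memory data standing in for the app's real state.
    tasks = [("buy milk", False), ("ship release", True)]
    return [title for title, is_done in tasks if is_done == done]

# A default widget and a user's script hit the exact same API:
print(todo.call("list_tasks", done=False))  # ['buy milk']
```

The design point is that the UI becomes just one client among many, so the user can swap it out or script against the endpoints directly.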
Ahh, yeah that would be pretty good, but I doubt it would ever happen in the West.
I do too, unfortunately. Incidentally, this could even be handled by the GUI toolkit itself, since native apps have to rely on it to build the user interface. The toolkit could automatically generate a JSON API from the UI definition, for example.
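A minimal sketch of that auto-generation idea (the widget schema and field names here are entirely invented): the toolkit walks the widget declarations it was handed anyway and derives a JSON API description from them, so every app gets an API for free.

```python
import json

# Hypothetical sketch: derive a JSON API spec from a declarative
# widget tree. Buttons become POST actions, data-bound widgets
# become GET endpoints returning whatever they were bound to.
def describe(widget):
    """Turn one widget declaration into an endpoint description."""
    return {
        "endpoint": f"/{widget['id']}",
        "method": "POST" if widget["kind"] == "button" else "GET",
        "returns": widget.get("binds"),
    }

ui = [
    {"id": "inbox", "kind": "list", "binds": "messages"},
    {"id": "send", "kind": "button"},
]
api = [describe(w) for w in ui]
print(json.dumps(api, indent=2))
```

Since the toolkit already knows every widget and what data it binds to, emitting this spec is mechanical; the hard part is the policy question of forcing apps to accept outside callers.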