antd-extension | Ant Design Extension
kandi X-RAY | antd-extension Summary
Ant Design Extension is an extended set of UI components for Ant Design that conforms to the Ant Design specification.
Community Discussions
Trending Discussions on Architecture
QUESTION
I have been studying software architecture and design at my university and I am in the design pattern section. I noticed that the adapter pattern implementation looks similar to the dependency injection that most frameworks use, such as Symfony, Angular, Vue and React, where we import a class and type-hint it in our constructor.
What are the differences, or is this just the frameworks' implementation of the adapter pattern?
...ANSWER
Answered 2022-Mar-09 at 16:41
Dependency injection can be used in the adapter pattern. So let's go step by step. Let me show what the adapter pattern and dependency injection are.
As Wikipedia says about the adapter pattern:
In software engineering, the adapter pattern is a software design pattern (also known as wrapper, an alternative naming shared with the decorator pattern) that allows the interface of an existing class to be used as another interface. It is often used to make existing classes work with others without modifying their source code.
Let's see a real-life example. Say we have a traveller who travels by car. But sometimes there are places where he cannot go by car. For example, he cannot go by car in a forest, but he can go by horse there. However, the Traveller class does not have a way to use the Horse class, so this is a place where the Adapter pattern can be used.
So let's look at how the Vehicle and Tourist classes look:
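The answer's original class listings are not reproduced above, so here is a hedged Python sketch of the same idea (class and method names are illustrative assumptions, not the answer's exact code): the Tourist receives a Vehicle through its constructor (dependency injection), and a HorseAdapter lets an incompatible Horse be used wherever a Vehicle is expected (adapter).

```python
from abc import ABC, abstractmethod


class Vehicle(ABC):
    """The interface the Tourist already knows how to use."""

    @abstractmethod
    def travel(self, destination: str) -> str:
        ...


class Car(Vehicle):
    def travel(self, destination: str) -> str:
        return f"Driving the car to {destination}"


class Horse:
    """Existing class with an incompatible interface."""

    def ride(self, destination: str) -> str:
        return f"Riding the horse to {destination}"


class HorseAdapter(Vehicle):
    """Adapter: wraps a Horse so it can be used wherever a Vehicle is expected."""

    def __init__(self, horse: Horse) -> None:
        self._horse = horse

    def travel(self, destination: str) -> str:
        return self._horse.ride(destination)


class Tourist:
    """The vehicle is injected through the constructor (dependency injection)."""

    def __init__(self, vehicle: Vehicle) -> None:
        self._vehicle = vehicle

    def go(self, destination: str) -> str:
        return self._vehicle.travel(destination)


# A framework's DI container would normally do this wiring for us.
print(Tourist(Car()).go("the city"))                       # Driving the car to the city
print(Tourist(HorseAdapter(Horse())).go("the forest"))     # Riding the horse to the forest
```

The distinction the question asks about: dependency injection only concerns how a collaborator is supplied (handed in via the constructor rather than created inside the class), while the adapter pattern concerns reshaping an incompatible interface. Frameworks use DI containers to do the wiring, and the injected object may or may not happen to be an adapter.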
QUESTION
I have an app architecture issue.
I want to make a landing page in something like Next.js, as it will need SEO.
I will also make a React app which does not need SEO and requires login.
My idea is that the user can be redirected from the landing page to the app's login page.
But how should this be hosted, and is this even a good idea?
Should both be hosted on different domains?
ANSWER
Answered 2022-Jan-19 at 17:14
Before we start, I do agree with you that these 2 different websites have completely different behaviors, hence demand different handling approaches.
To the point - let's break your question into the following factors:
- Availability
- Incoming Traffic
- Pricing
Availability
Most likely your landing page should be served via a CDN in order to get world-wide coverage, while the app itself may use a CDN. Some technical points to consider:
- If that page is also built with a modern stack, it can also be hosted via cloud-based storage
- You can select a CDN from your hosting cloud provider (such as AWS, Azure, GCP, etc.) or use a completely external service (such as MaxCDN). It mainly depends on the technological stack of your website
Incoming Traffic
As landing pages are open to the general public and use anonymous access (i.e., no login is required), they are at a high risk of DDoS and other malicious attacks. This is why these websites are mostly hosted behind a WAF, a gateway or some other tier that helps them protect themselves from being hijacked. Also, in most use cases, these websites should handle very high loads (way more than the app itself, which is login-protected). Some key points:
- Landing page loads may change drastically and without any warning, so the page should be deployed in an elastic, highly available manner, which means: when high loads occur, use more resources to handle them (and automatically release them when loads return to normal levels)
- In terms of logs, incoming traffic is different when dealing with identified users and when handling anonymous access - both in terms of cyber security and of data analysis
- Apps that require login will mostly need a solid gateway and some sort of identity management solution. These parts bring no benefit to the landing page in terms of functionality, and they also consume resources
Pricing
Yes, we want to gain as much flexibility as possible, but we also want to pay the lowest price possible. Coupling these 2 different apps may mean using expensive resources just to handle landing page loads. On the other hand, decoupling them will allow us to track every resource group and pay only for what we are using. So yes - it even makes sense in terms of pricing.
In short (sort of) - 2 different apps should have 2 different deployments, each with its own technical stack and configuration. That will give us:
- Flexibility - change in one environment will not damage the other
- Deployment - 2 different pipelines, each dedicated to a single solution
- Pricing - there is no need to waste resources (for example, by over-using libraries that consume resources that are unused most of the time), hence paying less
- DevOps - in some use cases, 2 different DevOps personnel may handle each pipeline, which may be an advantage
Hope this information helps
QUESTION
My team and I are considering using an authentication SaaS.
I am fairly sure that the SaaS solution will eventually be more secure than a hand-made one (even one using proper libs), and in our case we can afford the money.
But the thing that makes me hesitate the most is how this service will communicate with the rest of my app. Will it actually simplify our code or make it an even more complicated tangle in the end?
I understand that the user list with credentials, and possibly some attributes, is stored there.
But then, what should I store in my app's (SQL) DB?
Say I have users that belong to companies and other assets in a 1-n relationship.
I like to write things like:
...ANSWER
Answered 2022-Jan-17 at 06:17
I'm highly confused on the profits of externalizing what's usually the core object of an app. Am I thinking too monolithically? :-D
Yes, you are thinking too monolithically by assuming that you have to write and control all the code. You have asked a question about essentially outsourcing Authentication to an existing SaaS-based solution, when you could just as easily write your own. This is a common mistaken assumption that many developers make, especially in the area of Security for applications.
- Authentication is a core requirement for many solutions, but it is very rarely a core aspect or feature of the solution.
By writing your own solution to what is a generally standard concept (Authentication), you have to write, test and maintain your logic, including keeping up to date with the latest security trends over the lifetime of the product. In terms of direct Profit/Cost:
- Costs you a lot of time and effort to get it right
- Your own solution will add a layer of technical debt, future developers (internal or external) will need to familiarise themselves with your implementation before they can even start maintenance or improvement work
- You are directly assuming all the risks and responsibilities to maintain the security of the solution and its data.
Depending on the type of data and jurisdiction of your application, you may be asked down the track to implement multi-factor authentication or to force all users to re-register to adopt stronger security protocols. This can be a lot of effort for your own solution, or a simple tick of a box in the configuration of your Authentication provider.
Business / Data Schema
You need to be careful to separate the two concepts of Authentication and a User in the business domain. Regardless of where or what methodology you use to Authenticate your users, from a data integrity point of view it is important that there is a User concept in the database to associate related data for each user.
So there is no way around it: your business domain logic requires a table to represent a User in this business domain.
This User table should have an arbitrary Primary Key that is specific to the Application domain, and that table should store the token that is used to map that business user to the Authentication process. Then throughout your model, you can create FK references back to the user table.
In this way it may be possible for you to map users to multiple different providers, or to easily change the provider with minimal or zero impact on the rest of the business domain model.
What is important from a business process point of view is that the application can resolve the correct business User from the token or claims provided in the response from the authentication provider.
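A minimal sketch of that schema idea, assuming SQLite and illustrative table and column names (the answer does not prescribe any specific schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Application-owned user table: its own primary key, plus the token/subject
    -- issued by the external authentication provider.
    CREATE TABLE app_user (
        id            INTEGER PRIMARY KEY,
        auth_subject  TEXT NOT NULL UNIQUE,   -- maps this business user to the auth provider
        display_name  TEXT
    );

    -- The rest of the business model references the internal key, never the provider's id.
    CREATE TABLE company (
        id       INTEGER PRIMARY KEY,
        owner_id INTEGER NOT NULL REFERENCES app_user(id),
        name     TEXT NOT NULL
    );
""")


def resolve_user(claims):
    """Resolve the business User from the claims returned by the auth provider."""
    return conn.execute(
        "SELECT id, display_name FROM app_user WHERE auth_subject = ?",
        (claims["sub"],),
    ).fetchone()
```

Because every foreign key points at app_user.id rather than the provider's identifier, swapping or adding an authentication provider would only require updating the auth_subject mapping, in line with the answer's point about minimal impact on the business domain model.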
If SSO (Single Sign On) is appealing to you, then the choice of which Authentication provider to use can become an issue depending on the nature of your solution and the type of users who will be Authenticating. If the solution is tenanted to other businesses and offers B2B- or B2C-focused activities, then an Enterprise authentication solution like Azure AD or Google Cloud Identity might make sense. You would register your product in the client's authentication domain so that they can manage their users and access levels.
If the solution is more public focussed, then you might consider other social media Authentication providers as a means of simplifying Authentication for users, rather than forcing them to use your own bespoke Authentication process whose password they will invariably forget...
You haven't mentioned what language or runtime you are considering; however, if you do choose to write your own Authentication service, as a bare minimum you should consider implementing an OAuth 2.0 implementation to ensure that your solution adheres to standard practises and is compatible with other providers should you choose to use them later.
In a .NET-based environment I would suggest Identity Server 4 as a base level of security; there are a lot of resources on its implementation, and other frameworks should have similar projects or providers that you can host yourself. The point is that by using a standard implementation of your own Authentication Service, rather than writing one that is integrated into your software, you are not re-inventing anything; there is a lot of commercial and community support available to help you minimise the effort and cost of getting things up and running.
Conclusion
Ultimately, if you are concerned with Profit, and let's face it most of us are, then the idea that you would re-create the wheel just because you can adds a direct implementation and long-term maintenance Cost, and so will directly reduce Profitability, especially when the effort to implement existing Authentication providers in your solution is really low.
Even if you choose today to implement your own Authentication Service, it would be wise to implement it in such a way that you could easily offload that workload to an external provider; that is the natural evolution of security for small to mid-sized applications once users start to demand more stringent security requirements or additional features than it is cost-effective to provide in your native runtime.
Once security is implemented in your application, the rest of the business process generally evolves and we neglect to come back and review authentication until after a breach, if we or the client ever detect such an event. For this reason it is important that we get security as right as we can from the very start of a solution.
Whilst not directly related, this discussion reminds me of my favourite quote from Eric Lippert in a comment on an SO blog:
Eric Lippert on What senior developers can learn from beginners
...The notion that programming can be principled — that we proceed by understanding the abstractions afforded by the language, and then match those abstractions to a model of the business domain of the program — is apparently never taught to a great many programmers. Rather, many programmers proceed as though they’re exploring an undiscovered country, and going down paths more or less at random and hoping they end up somewhere good, no matter how twisted the path is that gets them there...
One of the reasons that we use external Authentication Providers is that the plethora of developers who have come before us have already learnt the hard lessons on what to do, or not to do, and have evolved a set of standards and protocols to provide best-practice guidelines on how to protect our users and their data when they are using our software. Many of these external providers represent best-practice implementations, and they maintain them for us as the standards continue to evolve, so that we don't have to.
QUESTION
I have a repeating dilemma while constructing a class in C++. I'd like to construct the class without propagating its internal dependencies outside.
I know I have options like:
- Use pimpl idiom
- Use forward declarations and only references or smart pointers in the header
ANSWER
Answered 2021-Nov-26 at 18:47
The underlying issue is that the consumers of MyNewClass need to know how big it is (e.g. if it needs to be allocated on the stack), so all members need to be known to be able to correctly calculate the size.
I'll not address the patterns you already described. There are a few more that could be helpful, depending on your use-case.
1. Interfaces
Create a class with only the exposed methods and a factory function to create an instance.
Upsides:
- Completely hides all private members
Downsides:
- requires heap allocation
- lots of virtual function calls
Header:
QUESTION
We have a resource called messages. We want to have two ways of listing its collection. One would return only messages that are mandatory and have been viewed; the other, all messages. Each one has fields that are not necessary for the other, thus we would like to not return them. E.g.
One response should look like this:
...ANSWER
Answered 2021-Nov-22 at 14:42
First of all, if it's messages, then /messages should be there, because the resource is a message.
After that, messages/mandatories means that there is a listing of mandatories, which is a subset of messages. If you intend to add or update a mandatory message with PUT, POST or PATCH, then messages/mandatories is the right way.
But if it's only a filter for messages that are mandatory, the best way to do it is with a GET like this: /messages?status=mandatories
Where status is the field that indicates whether the message is mandatory.
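A hedged sketch of that filter style, using Flask purely for illustration (the in-memory data and field names are assumptions based on the question, not part of the answer):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Illustrative in-memory data; a real service would query its own store.
MESSAGES = [
    {"id": 1, "text": "Read the safety policy", "mandatory": True, "viewed": True},
    {"id": 2, "text": "Team lunch on Friday", "mandatory": False, "viewed": False},
]


@app.route("/messages")
def list_messages():
    """GET /messages                    -> all messages
       GET /messages?status=mandatories -> only mandatory messages that were viewed"""
    status = request.args.get("status")
    items = MESSAGES
    if status == "mandatories":
        items = [m for m in items if m["mandatory"] and m["viewed"]]
    return jsonify(items)
```

The single /messages resource stays in place, and the query parameter acts purely as a filter, which keeps the URL space aligned with the resource model described in the answer.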
QUESTION
I have read lots of posts about using Python gettext, but none of them addressed the issue of changing languages at runtime.
Using gettext, strings are translated by the function _(), which is added globally to builtins. The definition of _ is language-specific and will change during execution when the language setting changes. At certain points in the code, I need strings in an object to be translated to a certain language. This happens by:
- (Re)define the _ function in builtins to translate to the chosen language
- (Re)evaluate the desired object using the new _ function - guaranteeing that any calls to _ within the object definition are evaluated using the current definition of _.
- Return the object
I am wondering about different approaches to step 2. I thought of several but they all seem to have fundamental flaws.
- What is the best way to achieve step 2 in practice?
- Is it theoretically possible to achieve step 2 for any arbitrary object, without knowledge of its implementation?
If all translated text is defined in functions that can be called in step 2, then it's straightforward: calling the function will evaluate using the current definition of _. But there are lots of situations where that's not the case; for instance, translated strings could be module-level variables evaluated at import time, or attributes evaluated when instantiating an object.
Minimal example of this problem with module-level variables is here.
Re-evaluation
Manually reload modules
Module-level variables can be re-evaluated at the desired time using importlib.reload. This gets more complicated if the module imports another module that also has translated strings. You have to reload every module that's a (nested) dependency.
With knowledge of the module's implementation, you can manually reload the dependencies in the right order: if A imports B,
...ANSWER
Answered 2021-Nov-10 at 03:49
The only plausible, general approach is to rewrite all relevant code to not only use _ to request translation but to never cache the result. That's not a fun idea and it's not a new idea (you already list Refactoring and Deferred translation, which rely on the cooperation of the gettext clients), but it is the "best way […] in practice".
You can try to do a super-reload by removing many things from sys.modules and then doing a real reimport. This approach avoids understanding the import relationships, but works only if the relevant modules are all written in Python and you can guarantee that the state of your program will retain no references to any objects (including types and modules) that used the old language. (I've done this, but only in a context where the overarching program was a sort of supervisor utterly uninterested in the features of the discarded modules.)
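A rough sketch of what such a super-reload could look like, assuming for illustration that all translatable modules live under a package named myapp (and with the answer's caveat: no live references to the old modules may survive):

```python
import importlib
import sys


def super_reload(package_prefix: str = "myapp"):
    """Drop every module under the given package from sys.modules and re-import it,
    so module-level _() calls are re-evaluated with the current translation."""
    stale = [name for name in sys.modules
             if name == package_prefix or name.startswith(package_prefix + ".")]
    for name in stale:
        del sys.modules[name]
    # Re-import the top package; submodules are pulled back in by its own imports.
    return importlib.import_module(package_prefix)
```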
You can try to walk the whole object graph and replace the strings, but even aside from the intrinsic technical difficulty of such an algorithm (consider __slots__ in base classes and co_consts for just the mildest taste), it would involve untranslating them, which changes from hard to impossible when some sort of transformation has already been performed. That transformation might just be concatenating the translated strings, or it might be pre-substituting known values to format, or padding the string, or storing a hash of it: it's certainly undecidable in general. (I've done this too for other data types, but only with data constructed by a file reader whose output used known, simple structures.)
Any approach based on partial reevaluation combines the problems of the methods above.
The only other possible approach is a super-LazyString that refuses to translate for longer by implementing operations like + to return objects that encode the transformations to eventually apply, but it's impossible to know when to force those operations unless you control all mechanisms used to display or transmit strings. It's also impossible to defer past, say, if len(_("…"))>80:.
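As a minimal sketch of the deferred-translation idea only (not the full super-LazyString with operator support that the answer describes), a lazy wrapper can postpone the lookup until the string is actually rendered:

```python
import builtins


class LazyString:
    """Stores the message id and defers translation until str() is called,
    so it always uses whatever _ is currently installed in builtins."""

    def __init__(self, msgid: str):
        self.msgid = msgid

    def __str__(self) -> str:
        translate = getattr(builtins, "_", lambda s: s)  # fall back to identity
        return translate(self.msgid)


def N_(msgid: str) -> LazyString:
    # Use in place of _ for values evaluated at import time.
    return LazyString(msgid)


GREETING = N_("Hello")   # module-level value; not translated yet
print(str(GREETING))     # translated with the _ that is active right now
```

As the answer notes, this only helps if every place that displays or transmits the value forces it with str() at the last moment; anything that consumes the string earlier (length checks, formatting, hashing) defeats the deferral.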
QUESTION
I needed to get the root item of a TreeView. The obvious way to get it is to use getRoot() on the TreeView, which I use.
I like to experiment, and was wondering if I could get the same root by climbing up the tree from a leaf item (a TreeItem), recursively using getParent() until the result is NULL.
It works as well, and, in my custom TreeItem, I added a public method 'getRoot()' to play around with it. I thus found out that this method already exists in the parent TreeItem, but is not exposed.
My question: why would it not be exposed? Is it a bad practice regarding OOP / MVC architecture?
...ANSWER
Answered 2021-Nov-06 at 22:57
The reason for the design is summed up by kleopatra's comment:
Why would it not be exposed I would pose it the other way round: why should it? It's convenience api at best, easy to implement by clients, not really needed - adding such to a framework/toolkit tends to exploding api/implementation to maintain.
JavaFX is filled with decisions like this on purpose. A lot of the reasoning is based on experience (good and bad) from AWT/Swing. Just some examples:
For specifying execution on the UI thread, there is a runLater API, but no invokeAndWait API like Swing, even though it would be easy for the framework to provide such an API and it has been requested.
- Providing an invokeAndWait API means that naive (and experienced :-) developers could use it incorrectly to accidentally deadlock threads.
Lots of classes are final and not extensible.
- Sometimes developers want to extend classes, but can't because they are final. This means that they can't over-ride a lot of the built-in tested functionality of the framework and accidentally break it that way. Instead they can usually use aggregation over inheritance to do what they need. The framework forces them to do so in order to protect itself and them.
Color objects are immutable.
- Immutable objects in general make stuff easier to maintain.
Native look and feels aren't part of the framework.
- You can still create them if you want, and there are 3rd party libraries that do that, but it doesn't need to be in the core framework.
The application programming interface is single-threaded, not multi-threaded.
- Because the developers of the framework realized that multi-threaded UI frameworks are a failed dream.
The philosophy was to code to make the 80% use case easier and the 20% use case (usually) possible, using additional user or 3rd party code, while making it difficult for the user code to accidentally (or intentionally) break the framework. You just stumbled upon one instance of an application of this philosophy.
There are a whole host of catch-phrases that you could use to describe the reason for this design approach. None of them are OOP or MVC specific. The underlying principles have been around far longer than software engineering, they are just approaches towards work and engineering in general. Here are some links if interested:
- You ain't going to need it YAGNI
- Minimal viable product MVP
- Worse-is-better
- Muntzing
- Feature creep prevention
- Keep it simple stupid KISS
- Occam's razor
QUESTION
I am building an application consisting of 3 components, each comprising a GUI part (views, controllers, presenters) and a domain part (use cases and entities). There is one common infrastructure component (database). With this I am trying to adhere to the clean architecture principles: dependencies towards the domain layer, and using dependency inversion to enable a flow of control indicated with the green arrow
There is a specific order in which the components are used (workflow). When a certain state is reached in a component (can be user initiated or after some work has been finished in the domain layer), the next component needs to be initialized and the GUI needs to move to the next view/page. Furthermore, each workflow step produces an output which serves as input for the next step, thereby setting the state of the next component (black dashed arrows from left to right). This data (ID: string, Matrix3D: 4x4 matrix) needs to be communicated across components.
What (and why) is a good solution (i.e., in which layer) to implement workflow logic, i.e., (de)activating components and initiating the transition to a new “view/page”?
E.g. I could add yet another domain component superordinate to all other domain components which deactivates the current component, initializes the next component, and initiates a transition in the view component.
E.g. I could add yet another GUI component superordinate to all other GUI components that initiates the transition in the views. Once the view is initialized, it could initiate the initialization of the corresponding domain component.
What (and why) is a good solution to communicate data across components?
- E.g., I could communicate the data via the infrastructure layer.
- E.g., I could pass all data that needs to be exchanged between components to the GUI layer (eventhough not all of this data is required in the GUI) and manage data exchange there.
- E.g., I could directly communicate in the domain layer (e.g., using a messaging system).
ANSWER
Answered 2021-Nov-05 at 18:57
It sounds like you would need an overarching GUI component that represents the actual workflow and uses instances of those other components. Most UI frameworks would help you with such composition. Since you didn't tell us which framework you are using, you'd have to figure that out using the framework's docs.
Generally make sure you create a loose coupling between parent and children to ensure the subordinate components can be reused independently.
Also, another thing you didn't ask for: in your diagram all components use the same DB block at the bottom. It's not clear what that means exactly, but make sure that each component has its own independent persistence, for example separate tables. Any need to communicate state from one component to the next should go through an API of the component owning the data, and not through directly shared DB tables.
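A hedged sketch of such an overarching workflow component (the step interface, the coordinator and the (ID, Matrix3D) payload shape are illustrative assumptions based on the question, not a prescribed design):

```python
from __future__ import annotations

from dataclasses import dataclass
from typing import Protocol


@dataclass
class StepOutput:
    """Data passed from one workflow step to the next (per the question)."""
    id: str
    matrix3d: list[list[float]]   # 4x4 matrix


class WorkflowComponent(Protocol):
    def activate(self, input_data: StepOutput | None) -> None: ...
    def deactivate(self) -> None: ...


class WorkflowCoordinator:
    """Superordinate component: owns the ordered steps, deactivates the current
    one, and activates the next one with the previous step's output."""

    def __init__(self, steps: list[WorkflowComponent]) -> None:
        self._steps = steps
        self._current = 0
        self._steps[0].activate(None)

    def advance(self, output: StepOutput) -> None:
        self._steps[self._current].deactivate()
        self._current += 1
        if self._current < len(self._steps):
            self._steps[self._current].activate(output)
```

Each step only knows its own activate/deactivate contract and the shared StepOutput type, which keeps the coupling between parent and children loose, as the answer recommends.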
QUESTION
I'm currently working on a simple 2D grid game. The grid of the game is composed of cells. A cell can contain (among other things) a landmine or a bomb. The classes that relate to my problem are Player, Board and Game.
This is how my classes are used within each other:
The class Game represents a game in all its details:
...ANSWER
Answered 2021-Oct-25 at 22:43
The placeMine() method belongs in the Game class.
The architecture of your game is logically divided into (Model-View-Controller):
- Model: Board / Cell
- Controller: Game
- Helper classes: Player (and Mine could be another if it had other properties)
The model contains the game's state. The controller is responsible for managing each turn of the game, and updating the state accordingly. The game's actions are dealt with by the controller. While a Player is the 'owner' of a turn, and perhaps the owner of a cell, etc, it is the Game which is placing a mine on any given turn on behalf of the player, and updating the state accordingly.
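A hedged sketch of that split (the class shapes are assumptions based on the classes named in the question, not the asker's actual code): the Board/Cell model holds the state, and the Game controller places the mine on behalf of the current player.

```python
class Cell:
    def __init__(self):
        self.has_mine = False


class Board:
    """Model: holds the grid state, knows nothing about turns or players."""

    def __init__(self, size):
        self.cells = [[Cell() for _ in range(size)] for _ in range(size)]

    def put_mine(self, row, col):
        self.cells[row][col].has_mine = True


class Player:
    def __init__(self, name):
        self.name = name


class Game:
    """Controller: manages each turn and updates the model accordingly."""

    def __init__(self, board, players):
        self.board = board
        self.players = players
        self.turn = 0

    def place_mine(self, row, col):
        current = self.players[self.turn % len(self.players)]
        self.board.put_mine(row, col)   # the state change lives in the model
        print(f"{current.name} placed a mine at ({row}, {col})")
        self.turn += 1
```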
QUESTION
REMARK:
- I use the term mocking as a general term meaning all kinds of test substitutes, including spies, fakes, mocks, stubs and all the rest.
- In writing my question I only speak of "use cases" in Clean Architecture, but my question also concerns "Domain Services" in DDD.
Let's go:
I am currently trying to implement DDD and Clean Architecture principles in a new project. However, I have been stuck for a week on one thing: how to write the unit test for my use case.
Here is my problem: in Clean Architecture, when we create a use case (or a Domain Service in DDD), it will in most cases depend on a certain number of entities + the rest (repository, API, ...).
To write my unit test of my use case, I start with:
- Mock "the rest" of dependencies that interact with the outside (repositories, API ...)
- But next, what should I do with the entity dependencies in my unit test?
Here are the solutions I thought of :
- Solution 1: I'm injecting fake entities
- However, through my reading about unit test best practices, I understand that we should avoid creating mocks as much as possible because they are "Code Smells" and a good design should make it possible to do without them.
- Indeed, mocking my entities implies that I weaken my test. The test will be tightly coupled to my mocked entities.
- In addition, recreating the structure of my entities seems meaningless to me ...
* If my use case uses multiple entity methods, then I would have to recreate the return value of each of those methods.
* If the structure of my entities is complex, I end up with complicated fakes to write; therefore my test loses a lot of reliability and there is a greater chance that my fake is wrong, rather than my original entity.
* Even worse, if I use a factory, then I will have to make a fake of the factory -> and that fake will have to build a fake entity ...
- Solution 2: I don't mock entities.
- On the other hand, if I do not mock my entities, then in my opinion I am heading towards integration tests: testing the interactions between the different entities ...
- Also, as some mocking supporters point out: if I don't mock my dependencies, then even if my tested unit is valid, the test will fail if my dependency causes a bug. This will raise a false alarm on my test ...
- Solution 3: Refactoring the production code
- Several articles I read offer solutions to limit the coupling (isolate the side effects from the rest of the logic, isolate the logic from I/O, use pure functions, dependency injection, ...). But even after applying all this, a use case will inevitably need these entities to work ... Therefore it is not a solution.
But then how to do? Am I missing something?
How do I write a GOOD unit test for a use case (or a domain service in DDD)? That is: how do I manage the entities in a unit test in which the tested unit has entity dependencies?
To illustrate my question, and to better understand your answers, here is a fictitious example of the problem:
Imagine that I have the following entity:
...ANSWER
Answered 2021-Oct-22 at 04:07
You are looking for a "What makes sense?" answer, so I can only tell you what makes sense from my perspective.
First, you don't need to mock all dependencies - or do you mock string and array list objects in other tests?
My goal is to make my unit tests as fast as possible and easy to setup.
- The tests will be very fast if they operate on pure POJOs. Your use cases use entities and the entities are POJOs thus your use case tests will run fast.
- I mock the repositories that provide entities, because the repositories usually connect the use cases to the external systems, e.g. a database or rest service. These are the systems that make tests slow and are hard to setup.
So to answer your questions...
How to write a GOOD unit test for a use case and how to handle entity dependencies in my test ?
Use Solution 2: I don't mock entities. I usually do that.
In general, what should I do with my entity dependencies in my unit tests?
You can mock them, but it makes your code more complicated. So just use them and make sure that they are plain objects.
In this example above how to write a good unit test for "addHorseUseCase"?
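As a hedged sketch following the question's fictitious example (AddHorseUseCase, the Horse entity and the repository are illustrative stand-ins, not the question's actual code): the repository, which reaches outside the use case, is mocked, while the entity is used as a plain object.

```python
from dataclasses import dataclass
from unittest.mock import Mock


@dataclass
class Horse:
    """Plain domain entity: cheap to create, no reason to mock it."""
    name: str
    age: int

    def is_adult(self) -> bool:
        return self.age >= 4


class AddHorseUseCase:
    def __init__(self, horse_repository):
        self._repo = horse_repository

    def execute(self, horse: Horse) -> bool:
        if not horse.is_adult():
            return False
        self._repo.save(horse)
        return True


def test_add_horse_saves_adult_horse():
    repo = Mock()                      # mock only the boundary (repository)
    use_case = AddHorseUseCase(repo)

    assert use_case.execute(Horse(name="Jolly", age=6)) is True
    repo.save.assert_called_once()


def test_add_horse_rejects_foal():
    repo = Mock()
    use_case = AddHorseUseCase(repo)

    assert use_case.execute(Horse(name="Tiny", age=1)) is False
    repo.save.assert_not_called()
```

The tests stay fast and simple because the entities are plain objects, which is exactly the answer's Solution 2: mock the repositories at the boundary and use the real entities inside the test.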
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported