kandi X-RAY | OptaPlanner Summary
OptaPlanner applications: optical-optimal. Optical Optimal is a GUI-based application that backs up directories onto several discs or USB flash drives. The objective is to store as many directories as possible onto a single disc or USB flash drive, subject to a set of criteria. See the Readme file of the optical-optimal project for more detail.
Community Discussions
Trending Discussions on OptaPlanner
QUESTION
When OptaPlanner is used in web service, which means an OptaPlanner app is required to solve multiple problems in parallel threads, are there any limitations to prevent OptaPlanner from doing this? Is synchronization required in any OptaPlanner functions? Thanks
...ANSWER
Answered 2021-Jun-11 at 05:50
OptaPlanner supports this: it's a common use case.
In a single JVM, look at SolverManager to solve multiple datasets of the same use case in parallel. This works even if the constraint weights differ per dataset (see ConstraintConfiguration), and even if some datasets disable or enable constraints that others don't.
For different use cases in a single JVM, just create multiple SolverFactory or SolverManager instances. This is uncommon, because usually each use case is a different app (= microservice?).
Across multiple JVMs (= pods), there are several good techniques. Our activemq quickstart scales beautifully horizontally. Read Radovan's blog about how ActiveMQ is used to load-balance the work across the solver pods.
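Conceptually, SolverManager behaves like a thread pool of independent solver runs, one per dataset, with no shared state between them. A plain-Java sketch of that idea (this is an analogy, not the OptaPlanner API; the real SolverManager.solve() returns a SolverJob rather than a Future):

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelSolveSketch {
    // Stand-in for one solver run: each dataset is solved independently,
    // so no synchronization between runs is needed.
    static int solve(List<Integer> dataset) {
        return dataset.stream().mapToInt(Integer::intValue).sum();
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        // Submit several datasets; they are solved on separate threads.
        Future<Integer> job1 = pool.submit(() -> solve(List.of(1, 2, 3)));
        Future<Integer> job2 = pool.submit(() -> solve(List.of(10, 20)));
        // get() blocks, similar to SolverJob.getFinalBestSolution().
        System.out.println(job1.get() + " " + job2.get());
        pool.shutdown();
    }
}
```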
QUESTION
I want to use Simulated Annealing in OptaPlanner, but I am a little baffled by the fact that there is only a setting for the initial temperature and not one for the decay rate. What is the reason for this choice?
...ANSWER
Answered 2021-Jun-11 at 05:44
The cooldown rate is automatically derived from the timeGradient, which is, simply put, 0.0 at the start, 0.5 at half of the spent time and 1.0 when all of the spent time is used.
But yes, the classic Simulated Annealing method has 2 parameters (starting temperature and cooldown rate). One could implement such an SA pretty easily by copy-pasting the SimulatedAnnealingAcceptor and configuring it in the AcceptorConfig.
That being said, tuning 2 parameters is a pain for users. That's why OptaPlanner's default SA has only 1 parameter which, together with the termination, is translated into the 2 parameters that SA needs.
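In solver config XML, that single parameter looks roughly like this (a sketch based on the documented acceptor settings; the starting temperature value is illustrative and uses the score notation of your own score type):

```xml
<localSearch>
  <acceptor>
    <!-- The only SA parameter; the cooldown is derived from the termination. -->
    <simulatedAnnealingStartingTemperature>2hard/100soft</simulatedAnnealingStartingTemperature>
  </acceptor>
  <forager>
    <acceptedCountLimit>1</acceptedCountLimit>
  </forager>
</localSearch>
```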
QUESTION
How can I initialize a ConstraintVerifier in Kotlin without using Drools and Quarkus? I already added the optaplanner-test JAR and the Maven Dependency for Optaplanner 8.6.0.Final and tried it the following way:
...ANSWER
Answered 2021-Jun-07 at 13:18
There are several issues with your test. First of all:
QUESTION
I have a question about OptaPlanner constraint stream API. Are the constraint matches only used to calculate the total score and are meant to help the user see how the score results, or is this information used to find a better solution?
With "used to find a better solution" I mean the information is used to get the next move(s) in the local search phase.
So does it matter which planning entity I penalize?
Currently, I am working on an examination scheduler. One requirement is to distribute the exams of a single student optimally. The number of exams per student varies. Therefore, I wrote a cost function that gives a normalized value, indicating how well the student's exams are distributed.
Let's say the examination schedule in the picture has costs of 80. Now, I need to break down this value to the individual exams. There are two different ways to do this:
- Option A: Penalize each of the exams with 10 (10 * 8 = 80).
- Option B: Penalize each exam according to its actual impact => only the exams in the last week are penalized, as the distribution of exams in weeks one and two is fine.
Obviously, option B is semantically correct. But does the choice of the option affect the solving process?
...ANSWER
Answered 2021-Jun-03 at 08:40
The constraint matches are there to help explain the score to humans. They do not, in any way, affect how the solver moves or what solution you are going to get. In fact, ScoreManager has the capability to calculate constraint matches after the solver has already finished, or for a solution that has never even been through the solver before.
(Note: constraint matching does affect performance, though. It slows everything down, due to all the object iteration and creation.)
To your second question: Yes, it does matter which entity you penalize. In fact, you want to penalize every entity that breaks your constraints. Ideally it should be penalized more, if it breaks the constraints more than some other entity - this way, you get to avoid score traps.
EDIT based on an edit to the question:
In this case, since you want to achieve fairness per student, I suggest your constraint does not penalize the exam, but rather the student. Per student, group the exams and apply some fairness ConstraintCollector. If you do it like that, you will be able to create a per-student fairness function and use its value as your penalty.
The OptaPlanner Tennis example shows one way of doing fairness. You may also be interested in a larger fairness discussion on the OptaPlanner blog.
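One possible shape for such a per-student cost function, in plain Java (the quadratic shortfall penalty, the minGap parameter and all names are illustrative assumptions, not taken from the question):

```java
import java.util.List;

public class ExamSpread {
    /**
     * Illustrative per-student spread cost: for each consecutive pair of exam
     * days closer together than minGap, add a quadratic penalty. Quadratic
     * growth makes two slightly-cramped pairs cheaper than one very cramped
     * pair, which steers local search away from score traps.
     *
     * @param sortedExamDays the student's exam days, in ascending order
     * @param minGap desired minimum number of days between two exams
     */
    static int spreadCost(List<Integer> sortedExamDays, int minGap) {
        int cost = 0;
        for (int i = 1; i < sortedExamDays.size(); i++) {
            int gap = sortedExamDays.get(i) - sortedExamDays.get(i - 1);
            if (gap < minGap) {
                int shortfall = minGap - gap;
                cost += shortfall * shortfall;
            }
        }
        return cost;
    }
}
```

In a constraint stream, a value like this could serve as the match weight of a per-student groupBy penalty.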
QUESTION
I'm looking at profiling the score calculation in my OptaPlanner project to find out if there are any hotspots that would benefit from being optimised. However, visualvm shows most of the time to be taken in the self time of org.drools.modelcompiler.constraints.ConstraintEvaluator$InnerEvaluator$_2.evaluate. I therefore assume that this method is what actually runs a lot of the constraint's code. What is the best way to find out what specific pieces of code are taking the most time?
ANSWER
Answered 2021-Jun-01 at 13:54
The thing to understand about Constraint Streams is that it is not imperative programming, and therefore traditional performance optimization techniques such as code profiling are not going to be very helpful. Instead, I suggest you think of Constraint Streams as SQL - the way to have fast SQL is to think about how your data flows, how you join and what gets indexed.
Recently I wrote a blog post explaining the tricks behind making CS run fast. However, CS is internally interpreted by the Drools engine, and therefore studying it may give you some insights too. Not all insights there are applicable to CS, but if you take a look at drools-metric, you should be able to see which constraints are comparatively slow. And then it becomes a game of tweaking.
QUESTION
I am trying to use Optaplanner to optimally assign production units to a group of contractors. The goal is to assign the production units so that the real inventory is as close as possible to the optimal inventory. For this, I am using a soft constraint which, in simple terms, follows this basic equation:
This is the basis for my soft constraint score. The purpose of Optaplanner is to minimize this difference. However, I do have a hard constraint that specifies the minimum production units that can be assigned. Whenever this constraint is introduced to the problem, Optaplanner produces less than optimal solutions. It basically assigns all the available production units to one single contractor so as to not break the minimum production quantity constraint.
This ensures that no contractor is assigned a small amount of production units (assigning 0 is ok), but it has the unintended consequence that the soft constraint is not optimized at all. Extending the run time basically just moves the entirety of the production units around between different contractors.
It is not a matter of not having enough production units since I have around 10000 available units while the minimum production quantity is 120. This should be enough to assign it to several contractors and optimize the inventory instead of assigning the 10000 units to one single contractor.
Is there a way to include this hard constraint without Optaplanner completely ignoring the soft constraint?
EDIT: To clarify, the hard constraint only penalizes when the production units are below 120. It does not reward production units above 120. The max possible hard score is 0.
(For simplicity I said "assign units to contractors" when in reality Optaplanner is "assigning contractors to the units")
...ANSWER
Answered 2021-May-28 at 06:42
It's probably stuck in a deep local optimum due to a lack of smart enough moves to get out of it (and it's too deep for the metaheuristics to escape). If you turn on DEBUG (and later TRACE) logging, you'll see the decisions that OptaPlanner makes.
Try adding pillar change and swap moves:
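A sketch of what such a move selector config might look like (element names are the standard OptaPlanner selectors; the pillar moves are added alongside, not instead of, the default change and swap moves):

```xml
<localSearch>
  <unionMoveSelector>
    <changeMoveSelector/>
    <swapMoveSelector/>
    <!-- Pillar moves reassign or swap whole groups of entities that share
         the same value, which helps escape deep local optima. -->
    <pillarChangeMoveSelector/>
    <pillarSwapMoveSelector/>
  </unionMoveSelector>
</localSearch>
```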
QUESTION
I created a new file called "solverConfig.xml" under resources and added the following to application.properties: quarkus.optaplanner.solver-config-xml=src/main/resources/solverConfig.xml. However, Quarkus does not recognize the classpath. It says: Invalid quarkus.optaplanner.solverConfigXML property (src/main/resources/solverConfig.xml): that classpath resource does not exist. I followed the response of "Optaplanner and Quarkus solver config update", but it does not work.
The solverConfig.xml is configured as:
...ANSWER
Answered 2021-May-25 at 09:21
The src/main/resources prefix isn't part of the value for that property:
- Either don't have a quarkus.optaplanner.solver-config-xml property in application.properties, which means it will pick up src/main/resources/solverConfig.xml (recommended, for standardization only).
- Or set it explicitly to quarkus.optaplanner.solver-config-xml=solverConfig.xml to pick up src/main/resources/solverConfig.xml.
PS: solverConfig.xml in Quarkus doesn't need an entityClass, solutionClass or constraintProviderClass. It picks those up automatically.
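For reference, a minimal application.properties matching the explicit variant might look like this (assuming the file lives at src/main/resources/solverConfig.xml):

```properties
# src/main/resources/application.properties
# The value is a classpath resource, so no src/main/resources prefix:
quarkus.optaplanner.solver-config-xml=solverConfig.xml
```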
QUESTION
We are trying to solve a VRP with Optaplanner. The score calculation runs via constraint streams.
Now I have two vehicles (A and B) and want to schedule two jobs (J1 and J2). The construction heuristic (FIRST_FIT_DECREASING) schedules J1 to A and J2 to B, which is correct so far.
Now the two jobs also have an attribute "customer", and I want to assign a penalty if the customer of the two jobs is the same but the vehicles are different.
For this purpose, I have created a constraint in the ConstraintProvider that filters all jobs via groupBy that have the same customer but different vehicles.
If I now switch on the FULL_ASSERT_MODE, an IllegalStateException occurs after scheduling J2, because the score that is calculated incrementally differs from the score of the complete recalculation. I suspect this is because the VariableListener, which recalculates the times of the jobs, only tells the ScoreDirector about a change to job J2 for my shadow variables, and therefore only the score part related to J2 is updated.
How can I tell Optaplanner that the score for J1 must also be recalculated? I can't get to job J1 via the VariableListener to tell the ScoreDirector that the score has to be changed here.
Or does this problem require a different approach?
...ANSWER
Answered 2021-May-24 at 12:11
This is a problem that is a bit hard to explain fully. TLDR version: constraint streams only react to changes on objects coming from either from(), join() or ifExists(). Changes on objects not coming through these statements will not be caught, and therefore cause score corruptions. Longer explanation follows.
Consider a hypothetical Constraint Stream like this:
QUESTION
Need some ideas on how to build a rule in my task assignment project. I assign workers to tasks; each task has a location where it happens, and I want a soft constraint that makes a worker's next task as close as possible to the fulfilled task. But in DRL, how can I know which task is the worker's previous task? The information is in the Solution class. An example is greatly appreciated. Is there any OptaPlanner example I can refer to, so I can learn how to get values from the Solution?
...ANSWER
Answered 2021-May-20 at 07:23
There is a task assigning example in the optaplanner-examples module, which shows how to model such a problem. The main idea is that every task points to the next task and to the previous task or the worker. The worker is the first element of such a chain. In this example, one of the goals is to minimize the makespan; your soft constraint about location sounds very similar - instead of penalizing for the amount of time required to complete all the tasks by a single worker, it would focus on the distance between locations associated with each task.
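The chain structure described above can be sketched in plain Java (all names here are illustrative stand-ins, not the actual classes of the optaplanner-examples module):

```java
import java.util.Objects;

public class ChainSketch {
    // Both a worker and a task can be the "previous" element of a chain.
    interface TaskOrWorker {
        String location();
    }

    // The worker anchors the chain as its first element.
    record Worker(String location) implements TaskOrWorker { }

    static final class Task implements TaskOrWorker {
        final String location;
        TaskOrWorker previous; // in OptaPlanner, this is the chained planning variable

        Task(String location, TaskOrWorker previous) {
            this.location = location;
            this.previous = previous;
        }

        public String location() { return location; }
    }

    // Soft-constraint ingredient: distance from the previous chain element's
    // location to this task's location (here a toy 0/1 "distance").
    static int distanceFromPrevious(Task task) {
        return Objects.equals(task.previous.location(), task.location) ? 0 : 1;
    }
}
```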
QUESTION
New to OptaPlanner and want to debug the example Task Assigning in Eclipse to learn. Found that the breakpoints are hit only when the code is called by the UI-related code, such as
...ANSWER
Answered 2021-May-12 at 07:38
A solution like TaskAssigningSolution is planning-cloned (see the docs for what this means) through reflection on the fields. See FieldAccessingSolutionCloner. You can write your own solution cloner to avoid that behavior (but that's very error-prone to write correctly).
We have an RFE running to support something like accessFieldsThroughGetterSetters=true to be more JDK 17 friendly, which would force OptaPlanner to never use field access for non-public fields.
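What such a field-accessing cloner does can be sketched in a few lines of plain Java (a simplified shallow version; the real FieldAccessingSolutionCloner also deep-clones planning entity collections). Because fields are copied via reflection, getters and setters are bypassed, which is why breakpoints placed in them are never hit during cloning:

```java
import java.lang.reflect.Field;

public class ReflectionCloneSketch {
    // Simplified version of what a field-accessing solution cloner does:
    // copy every declared field via reflection, bypassing getters/setters.
    @SuppressWarnings("unchecked")
    static <T> T shallowClone(T original) {
        try {
            T clone = (T) original.getClass().getDeclaredConstructor().newInstance();
            for (Field field : original.getClass().getDeclaredFields()) {
                // On newer JDKs, private field access may need --add-opens.
                field.setAccessible(true);
                field.set(clone, field.get(original));
            }
            return clone;
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static class Example {
        private int score = 42;
        public int getScore() { return score; } // never called while cloning
    }
}
```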
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install OptaPlanner
You can use OptaPlanner like any standard Java library: include the jar files in your classpath. You can also use any IDE, and you can run and debug the OptaPlanner component as you would any other Java program. Best practice is to use a build tool that supports dependency management, such as Maven or Gradle. For Maven installation, refer to maven.apache.org; for Gradle installation, refer to gradle.org.