swagger_meqa | Auto generate and run tests using swagger | REST library
Trending Discussions on REST
QUESTION
I am trying to upgrade to React Router v6 (react-router-dom 6.0.1).
Here is my updated code:
import { BrowserRouter, Navigate, Route, Routes } from 'react-router-dom';

<Routes>
  {/* JSX reconstructed; the original markup was stripped and component names are illustrative */}
  <Route path="/" element={<Home />} />
  <Route path="/settings" element={<Settings />} />
  <Route path="*" render={() => <Navigate to="/" />} />
</Routes>
The last Route is redirecting the rest of the paths to /.
However, I got an error:
TS2322: Type '{ render: () => Element; }' is not assignable to type 'IntrinsicAttributes & (PathRouteProps | LayoutRouteProps | IndexRouteProps)'. Property 'render' does not exist on type 'IntrinsicAttributes & (PathRouteProps | LayoutRouteProps | IndexRouteProps)'.
However, based on the doc, Route does have a render prop. How do I use it correctly?
ANSWER
Answered 2022-Mar-18 at 18:41

I think you should use the no-match route approach.
Check this in the documentation.
https://reactrouter.com/docs/en/v6/getting-started/tutorial#adding-a-no-match-route
import { BrowserRouter, Navigate, Route, Routes } from 'react-router-dom';

<Routes>
  {/* JSX reconstructed; component names are illustrative */}
  <Route path="/" element={<Home />} />
  <Route path="/settings" element={<Settings />} />
  <Route path="*" element={<Navigate to="/" replace />} />
</Routes>
To keep the history clean, you should set the replace prop. This will avoid extra redirects after the user clicks back. Thanks @Paul for this tip.
QUESTION
Per [intro.object]/2:
[..] An object that is not a subobject of any other object is called a complete object [..].
So consider this snippet of code:
struct Base {};
struct Derived : Base {};
struct MostDerived : Derived {};
I can't understand the wording in this quote from the standard:
If a complete object, a member subobject, or an array element is of class type, its type is considered the most derived class [..] An object of a most derived class type or of a non-class type is called a most derived object.
What I understand from the quote is that the type of a complete object is a "most derived" class type. I stopped there; I really do not understand the rest of the wording.
Per the question "What does the "most derived object" mean?" I think (correct me if I am wrong) that only objects of a "most-derived" class type, like MostDerived, are called "most-derived" objects. Is this true?

- If I have created an object of Base like this: Base b_obj = Base();, is the object b_obj a "most-derived" object?
- If I have created an object of Derived like this: Derived d_obj = Derived();, is the object d_obj also a "most-derived" object?
- Does the word "derived" in "most-derived" mean that the object is an object of a class like MostDerived, or that the object has no class subobject in it?
ANSWER
Answered 2022-Mar-21 at 00:32

- An object is not a class.
- An object is an instantiation of a class, an array, or a built-in type.
- Subobjects are class member objects, array elements, or base-class subobjects of an object.
- Derived objects (and most-derived objects) only make sense in the context of class inheritance.
void foo() {
    int i = 0; // complete object, but not most-derived (not class type)
}

class A {
    int i = 0; // member subobject, not a complete object, not most-derived
};

void bar() {
    A a; // complete object, but not derived, so can't be "most derived"
}

class B : A { };

void biz() {
    B b; // complete object, derived object, and most-derived object
}
Is every "complete" object a "most-derived" object?
No. A most-derived object is an object of a most-derived class, and a most-derived class must be of a class type. Objects may be of class type, but non-class type objects also exist.
- Every complete object of class-type is a most-derived object only if that class inherits.
- A most-derived object may be a subobject, so you cannot infer object completeness from most-derivedness (however, you can infer that the most-derived object is of class type).
So if I have created an object of Base like this: Base b_obj = Base();, is the object b_obj a "most-derived" object?

Yes. The most-derived object of b_obj is an object of type Base. This is not necessarily a complete object, however, since this could be a class member definition. Again, complete is not synonymous with most-derived.
Also, if I have created an object of Derived like this: Derived d_obj = Derived();, is the object d_obj also a "most-derived" object?

Yes. The most-derived object of d_obj is an object of type Derived.
If you have an object created as type MostDerived:

MostDerived md;

- It is an object of type MostDerived.
- It is an object of type Derived.
- It is an object of type Base.
- If it is not a member subobject, then it is a complete object.
- Its most-derived object is of type MostDerived.
- It has a subobject of type Derived, which is neither a complete object nor a most-derived object.
- Its subobject of type Derived has a subobject of type Base, which is neither a complete object nor a most-derived object.
QUESTION
I was wondering if there was an easy solution to the following problem: I want to keep every element occurring in a list after an initial condition first becomes true. The condition here is that a value is greater than 18; I want to remove everything before the first such value, but keep that value and everything after it. Example:
Input:
p = [4,9,10,4,20,13,29,3,39]
Expected output:
p = [20,13,29,3,39]
I know that you can filter the entire list with [x for x in p if x > 18], but I want to stop this operation once the first value above 18 is found, and then include the rest of the values regardless of whether they satisfy the condition. It seems like an easy problem but I haven't found the solution to it yet.
ANSWER
Answered 2022-Feb-05 at 19:59

You can use itertools.dropwhile:
from itertools import dropwhile
p = [4,9,10,4,20,13,29,3,39]
p = dropwhile(lambda x: x <= 18, p)
print(*p) # 20 13 29 3 39
In my opinion, this is arguably the easiest-to-read version. It also corresponds to a common pattern in other functional programming languages, such as dropWhile (<=18) p in Haskell and p.dropWhile(_ <= 18) in Scala.
Alternatively, using the walrus operator (only available in Python 3.8+):
exceeded = False
p = [x for x in p if (exceeded := exceeded or x > 18)]
print(p) # [20, 13, 29, 3, 39]
But my guess is that some people don't like this style. In that case, one can do an explicit for loop (ilkkachu's suggestion):
for i, x in enumerate(p):
    if x > 18:
        output = p[i:]
        break
else:
    output = []  # alternatively just put output = [] before the for
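For a side-by-side check, all three approaches above produce the same result on the sample input (a quick sketch; the walrus version needs Python 3.8+):

```python
from itertools import dropwhile

p = [4, 9, 10, 4, 20, 13, 29, 3, 39]

# 1. itertools.dropwhile: drop elements while they are <= 18
a = list(dropwhile(lambda x: x <= 18, p))

# 2. Walrus operator: a flag that flips the first time the condition holds
exceeded = False
b = [x for x in p if (exceeded := exceeded or x > 18)]

# 3. Explicit loop: find the first index above 18, then slice
c = []
for i, x in enumerate(p):
    if x > 18:
        c = p[i:]
        break

assert a == b == c == [20, 13, 29, 3, 39]
```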
QUESTION
I have run into an odd problem after converting a bunch of my YAML pipelines to use templates for holding job logic as well as for defining my pipeline variables. The pipelines run perfectly fine; however, I get a "Some recent issues detected related to pipeline trigger." warning at the top of the pipeline summary page, and viewing details only states: "Configuring the trigger failed, edit and save the pipeline again."
The odd part here is that the pipeline works completely fine, including triggers. Nothing is broken and no further details are given about the supposed issue. I currently have YAML triggers overridden for the pipeline, but I did also define the same trigger in the YAML to see if that would help (it did not).
I'm looking for any ideas on what might be causing this or how I might be able to further troubleshoot it given the complete lack of detail that the error/warning provides. It's causing a lot of confusion among developers who think there might be a problem with their builds as a result of the warning.
Here is the main pipeline. The build repository is a shared repository holding code used across multiple repos in the build system. dev.yaml contains dev-environment-specific variable values. shared.yaml holds conditionally set variables based on the branch the pipeline is running on.
name: ProductName_$(BranchNameLower)_dev_$(MajorVersion)_$(MinorVersion)_$(BuildVersion)_$(Build.BuildId)

resources:
  repositories:
    - repository: self
    - repository: build
      type: git
      name: Build
      ref: master

# This trigger isn't used yet, but we want it defined for later.
trigger:
  batch: true
  branches:
    include:
      - 'dev'

variables:
  - template: YAML/variables/shared.yaml@build
  - template: YAML/variables/dev.yaml@build

jobs:
  - template: ProductNameDevJob.yaml
    parameters:
      pipelinePool: ${{ variables.PipelinePool }}
      validRef: ${{ variables.ValidRef }}
Then this is the start of the actual job yaml. It provides a reusable definition of the job that can be used in more than one over-arching pipeline:
parameters:
  - name: dependsOn
    type: object
    default: {}
  - name: pipelinePool
    default: ''
  - name: validRef
    default: ''
  - name: noCI
    type: boolean
    default: false
  - name: updateBeforeRun
    type: boolean
    default: false

jobs:
  - job: Build_ProductName
    displayName: 'Build ProductName'
    pool:
      name: ${{ parameters.pipelinePool }}
      demands:
        - msbuild
        - visualstudio
    dependsOn:
      - ${{ each dependsOnThis in parameters.dependsOn }}:
          - ${{ dependsOnThis }}
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], variables['ValidRef']))
    steps:
      # step logic here
Finally, we have the variable YAML which conditionally sets pipeline variables based on what we are building:
variables:
  - ${{ if or(eq(variables['Build.SourceBranch'], 'refs/heads/dev'), eq(variables['Build.SourceBranch'], 'refs/heads/users/ahenderson/azure_devops_build')) }}:
      - name: BranchName
        value: Dev

# Continue with the rest of the pipeline variables, setting each value for each different context.
ANSWER
Answered 2021-Aug-17 at 14:58

I think I may have figured out the problem. It appears that this is related to the use of conditionals in the variable setup. While the variables will be set in any valid trigger configuration, it appears that the proper values are not used during validation, and that may have been causing the problem. Switching my conditional variables to first set a default value and then replace the value conditionally seems to have fixed the problem.
It would be nice if Microsoft would give a more useful error message here, something to the extent of the values not being found for a given variable, but adding defaults does seem to have fixed the problem.
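A sketch of that workaround, based on the variable template shown in the question (the branch condition is copied from it; the default value Main is illustrative): declare the variable unconditionally first, then re-declare it inside the condition so the conditional value replaces the default during template expansion:

```yaml
variables:
  # Always-present default, so trigger validation can resolve the variable
  - name: BranchName
    value: Main
  # Conditional override; replaces the default when the branch matches
  - ${{ if eq(variables['Build.SourceBranch'], 'refs/heads/dev') }}:
      - name: BranchName
        value: Dev
```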
QUESTION
I'm trying to get multiple labels per item on a Kendo Column chart. The desired layout looks like this [screenshot in the original post], but I was able to get only a single label per item:
import { Component } from '@angular/core';
import { groupBy, GroupResult } from '@progress/kendo-data-query';
import { ValueAxisLabels } from '@progress/kendo-angular-charts';

export type TrendItem = {
  clientName: string;
  periodName: string;
  income: number;
};

@Component({
  selector: 'my-app',
  template: `
  `,
})
export class AppComponent {
  public valueAxisLabels: ValueAxisLabels = {
    font: 'bold 16px Arial, sans-serif',
  };

  public trendItems: TrendItem[] = [
    { clientName: 'Client1', periodName: 'Q1 2020', income: 20 },
    { clientName: 'Client1', periodName: 'Q2 2020', income: 15 },
    { clientName: 'Client1', periodName: 'Q3 2020', income: 35 },
    { clientName: 'Client1', periodName: 'Q4 2020', income: 40 },
    { clientName: 'Client2', periodName: 'Q1 2020', income: 15 },
    { clientName: 'Client2', periodName: 'Q2 2020', income: 20 },
    { clientName: 'Client2', periodName: 'Q3 2020', income: 15 },
    { clientName: 'Client2', periodName: 'Q4 2020', income: 30 },
  ];

  public categories = (groupBy(this.trendItems, [{ field: 'clientName' }]) as GroupResult[])
    .map((e) => e.value);

  public groupedTrendsByPeriod = groupBy(this.trendItems, [{ field: 'periodName' }]) as GroupResult[];

  public labelVisual(e: { dataItem: TrendItem }) {
    return `$${e.dataItem.income}\r\n${e.dataItem.periodName}`;
  }
}
You can try this code here. My current result looks like this [screenshot in the original post].

So my question is: how do I display multiple labels per item like in the first picture?
My current obstacles:

- I didn't find a way to add multiple label elements; only one will be rendered, the rest will be ignored.
- I didn't find a way to position labels below the column chart. For a column chart it's only possible to use the "center", "insideBase", "insideEnd", "outsideEnd" options (according to the API Reference), but none of them gives me the desired position.
ANSWER
Answered 2022-Jan-02 at 08:18

I don't think Kendo provides any native solution for that, but what I can suggest is to:
QUESTION
I got a large list of JSON objects that I want to parse depending on the start of one of the keys, and just wildcard the rest. A lot of the keys are similar, like "matchme-foo" and "matchme-bar". There is a built-in wildcard, but it is only used for whole values, kinda like an else.
I might be overlooking something but I can't find a solution anywhere in the proposal:
https://docs.python.org/3/whatsnew/3.10.html#pep-634-structural-pattern-matching
Also a bit more about it in PEP-636:
https://www.python.org/dev/peps/pep-0636/#going-to-the-cloud-mappings
My data looks like this:
data = [{
    "id": "matchme-foo",
    "message": "hallo this is a message",
}, {
    "id": "matchme-bar",
    "message": "goodbye",
}, {
    "id": "anotherid",
    "message": "completely diffrent event"
}, ...]
I want to do something that can match the id without having to make a long list of |'s. Something like this:
for event in data:
    match event:
        case {'id': 'matchme-*'}:  # Match all 'matchme-' no matter what comes next
            log.INFO(event['message'])
        case {'id': 'anotherid'}:
            log.ERROR(event['message'])
It's a relatively new addition to Python so there aren't many guides on how to use it yet.
ANSWER
Answered 2021-Dec-17 at 10:43

You can use a guard:
for event in data:
    match event:
        case {'id': x} if x.startswith("matchme"):  # guard
            print(event["message"])
        case {'id': 'anotherid'}:
            print(event["message"])
Quoting from the official documentation:

Guard

We can add an if clause to a pattern, known as a "guard". If the guard is false, match goes on to try the next case block. Note that value capture happens before the guard is evaluated:
match point:
    case Point(x, y) if x == y:
        print(f"The point is located on the diagonal Y=X at {x}.")
    case Point(x, y):
        print(f"Point is not on the diagonal.")
QUESTION
I need to navigate back to the originally requested URL after login. For example, the user enters www.example.com/settings; as the user is not authenticated, the app navigates to the login page www.example.com/login. Once authenticated, it should navigate back to www.example.com/settings automatically.
My original approach with react-router-dom v5 is quite simple:

const PrivateRoute = ({ isLoggedIn, component: Component, ...rest }) => {
  // JSX reconstructed; the original markup was stripped
  return (
    <Route
      {...rest}
      render={(props) =>
        isLoggedIn ? (
          <Component {...props} />
        ) : (
          <Redirect to={{ pathname: '/login', state: { from: props.location } }} />
        )
      }
    />
  );
};
Can someone tell me how to do that in v6? Thanks in advance.
ANSWER
Answered 2021-Dec-15 at 05:41

In react-router-dom v6, rendering routes and handling redirects is quite different than in v5. Gone are custom route components; they are replaced with a wrapper component pattern.
v5 - Custom Route

Takes props and conditionally renders a Route component with the route props passed through, or a Redirect component with route state holding the current location.

const CustomRoute = ({ isLoggedIn, ...props }) => {
  // JSX reconstructed; the redirect target is illustrative
  const location = useLocation();
  return isLoggedIn ? (
    <Route {...props} />
  ) : (
    <Redirect to={{ pathname: '/login', state: { from: location } }} />
  );
};
...
v6 - Custom Wrapper

Takes props and conditionally renders an Outlet component for nested Route components to be rendered into, or a Navigate component with route state holding the current location.

const CustomWrapper = ({ isLoggedIn }) => {
  // JSX reconstructed; the redirect target and nested route are illustrative
  const location = useLocation();
  return isLoggedIn ? (
    <Outlet />
  ) : (
    <Navigate to="/login" replace state={{ from: location }} />
  );
};

...

<Route element={<CustomWrapper isLoggedIn={isLoggedIn} />}>
  <Route path="/settings" element={<Settings />} />
</Route>
QUESTION
I'm trying to test an API endpoint with a PATCH request to ensure it works. I'm using APILiveServerTestCase but can't seem to get the permissions required to patch the item. I created one user (adminuser) who is a superadmin with access to everything and all permissions.
My test case looks like this:
class FutureVehicleURLTest(APILiveServerTestCase):
    def setUp(self):
        # Setup users and some vehicle data we can query against
        management.call_command("create_users_and_vehicle_data", verbosity=0)
        self.user = UserFactory()
        self.admin_user = User.objects.get(username="adminuser")
        self.future_vehicle = f.FutureVehicleFactory(
            user=self.user,
            last_updated_by=self.user,
        )
        self.vehicle = f.VehicleFactory(
            user=self.user,
            created_by=self.user,
            modified_by=self.user,
        )
        self.url = reverse("FutureVehicles-list")
        self.full_url = self.live_server_url + self.url
        time = str(datetime.now())
        self.form_data = {
            "signature": "TT",
            "purchasing": True,
            "confirmed_at": time,
        }
I've tried this test a number of different ways - all giving the same result (403).
I have set up the Python debugger in the test, and I have tried actually going to http://localhost:xxxxx/admin/ in the browser and logging in manually with any user, but the page just refreshes when I click to log in and I never get 'logged in' to see the admin. I'm not sure if that's because it doesn't completely work from within a debugger like that or not.
My test looks like this (using the Requests library):
def test_patch_request_updates_object(self):
    data_dict = {
        "signature": "TT",
        "purchasing": "true",
        "confirmed_at": datetime.now().strftime("%m/%d/%Y, %H:%M:%S"),
    }
    url = self.full_url + str(self.future_vehicle.id) + "/"
    client = requests.Session()
    client.auth = HTTPBasicAuth(self.admin_user.username, "test")
    client.headers.update({"x-test": "true"})
    response = client.get(self.live_server_url + "/admin/")
    csrftoken = response.cookies["csrftoken"]
    # interact with the api
    response = client.patch(
        url,
        data=json.dumps(data_dict),
        cookies=response.cookies,
        headers={
            "X-Requested-With": "XMLHttpRequest",
            "X-CSRFTOKEN": csrftoken,
        },
    )
    # RESPONSE GIVES 403 PERMISSION DENIED
    fte_future_vehicle = FutureVehicle.objects.filter(
        id=self.future_vehicle.id
    ).first()
    # THIS ERRORS WITH '' not equal to 'TT'
    self.assertEqual(fte_future_vehicle.signature, "TT")
I have tried it very similarly to the documentation, using APIRequestFactory and forcing authentication:
def test_patch_request_updates_object(self):
    data_dict = {
        "signature": "TT",
        "purchasing": "true",
        "confirmed_at": datetime.now().strftime("%m/%d/%Y, %H:%M:%S"),
    }
    url = self.full_url + str(self.future_vehicle.id) + "/"
    api_req_factory = APIRequestFactory()
    view = FutureVehicleViewSet.as_view({"patch": "partial_update"})
    api_request = api_req_factory.patch(
        url, json.dumps(data_dict), content_type="application/json"
    )
    force_authenticate(api_request, self.admin_user)
    response = view(api_request, pk=self.future_assignment.id)
    fte_future_assignment = FutureVehicle.objects.filter(
        id=self.future_assignment.id
    ).first()
    self.assertEqual(fte_future_assignment.signature, "TT")
If I enter the debugger to look at the responses, it's always a 403.

The viewset itself is very simple:
class FutureVehicleViewSet(ModelViewSet):
    serializer_class = FutureVehicleSerializer

    def get_queryset(self):
        queryset = FutureVehicle.exclude_denied.all()
        user_id = self.request.query_params.get("user_id", None)
        if user_id:
            queryset = queryset.filter(user_id=user_id)
        return queryset
The serializer is just as basic as it gets - it's just the FutureVehicle model and all fields.

I just can't figure out why my user won't log in - or if maybe I'm doing something wrong in my attempts to patch? I'm pretty new to Django Rest Framework in general, so any guidance is helpful!

Edit to add - my DRF settings look like this:
REST_FRAMEWORK = {
    "DEFAULT_PAGINATION_CLASS": "rest_framework.pagination.LimitOffsetPagination",
    "DATETIME_FORMAT": "%m/%d/%Y - %I:%M:%S %p",
    "DATE_INPUT_FORMATS": ["%Y-%m-%d"],
    "DEFAULT_AUTHENTICATION_CLASSES": [
        # Enabling this will require Django Session (including CSRF)
        "rest_framework.authentication.SessionAuthentication"
    ],
    "DEFAULT_PERMISSION_CLASSES": [
        # Globally only allow IsAuthenticated users access to API endpoints
        "rest_framework.permissions.IsAuthenticated"
    ],
}
I'm certain adminuser is the user we wish to log in as - if I go into the debugger and check the users, they exist. During creation, every user's password is set to 'test'.
ANSWER
Answered 2021-Dec-11 at 07:34

The test you have written is also testing the Django framework logic (i.e. Django admin login). I recommend testing your own functionality, which occurs after login to the Django admin. Django's testing framework offers a helper for logging into the admin, client.login. This allows you to focus on testing your own business logic, rather than maintaining tests of internal Django authentication logic, which may change from release to release.

from django.test import TestCase, Client

class MyTestCase(TestCase):  # sketch: log in before exercising the endpoint
    def test_patch(self):
        client = Client()
        client.login(username=self.username, password=self.password)
However, if you must replicate and manage the business logic of what client.login is doing, here's some of the business logic from Django:
def login(self, **credentials):
    """
    Set the Factory to appear as if it has successfully logged into a site.

    Return True if login is possible or False if the provided credentials
    are incorrect.
    """
    from django.contrib.auth import authenticate

    user = authenticate(**credentials)
    if user:
        self._login(user)
        return True
    return False

def force_login(self, user, backend=None):
    def get_backend():
        from django.contrib.auth import load_backend

        for backend_path in settings.AUTHENTICATION_BACKENDS:
            backend = load_backend(backend_path)
            if hasattr(backend, 'get_user'):
                return backend_path

    if backend is None:
        backend = get_backend()
    user.backend = backend
    self._login(user, backend)

def _login(self, user, backend=None):
    from django.contrib.auth import login

    # Create a fake request to store login details.
    request = HttpRequest()
    if self.session:
        request.session = self.session
    else:
        engine = import_module(settings.SESSION_ENGINE)
        request.session = engine.SessionStore()
    login(request, user, backend)
    # Save the session values.
    request.session.save()
    # Set the cookie to represent the session.
    session_cookie = settings.SESSION_COOKIE_NAME
    self.cookies[session_cookie] = request.session.session_key
    cookie_data = {
        'max-age': None,
        'path': '/',
        'domain': settings.SESSION_COOKIE_DOMAIN,
        'secure': settings.SESSION_COOKIE_SECURE or None,
        'expires': None,
    }
    self.cookies[session_cookie].update(cookie_data)
Django client.login: https://github.com/django/django/blob/main/django/test/client.py#L596-L646
QUESTION
In this programming problem, the input is an n × m integer matrix. Typically, n ≈ 10^5 and m ≈ 10. The official solution (1606D, Tutorial) is quite imperative: it involves some matrix manipulation, precomputation and aggregation. For fun, I took it as an STUArray implementation exercise.

I have managed to implement it using STUArray, but the program still takes more memory than permitted (256 MB). Even when run locally, the maximum resident set size is >400 MB. On profiling, reading from stdin seems to dominate the memory footprint:
Functions readv and readv.readInt, responsible for parsing integers and saving them into a 2D list, take around 50-70 MB, as opposed to around 16 MB = (10^6 integers) × (8 bytes per integer + 8 bytes per link).
Is there any hope of getting the total memory below 256 MB? I'm already using the Text package for input. Maybe I should avoid lists altogether and read integers from stdin directly into the array. How can we do that? Or is the issue elsewhere?
{-# OPTIONS_GHC -O2 #-}
module CF1606D where
import qualified Data.Text as T
import qualified Data.Text.IO as TI
import qualified Data.Text.Read as TR
import Control.Monad
import qualified Data.List as DL
import qualified Data.IntSet as DS
import Control.Monad.ST
import Data.Array.ST.Safe
import Data.Int (Int32)
import Data.Array.Unboxed
solve :: IO ()
solve = do
  ~[n,m] <- readv
  -- 2D list
  input <- {-# SCC input #-} replicateM (fromIntegral n) readv
  let
    ints = [1..]
    sorted = DL.sortOn (head.fst) (zip input ints)
    (rows,indices) = {-# SCC rows_inds #-} unzip sorted
    -- 2D list converted into matrix:
    matrix = mat (fromIntegral n) (fromIntegral m) rows
    infinite = 10^7
    asc x y = [x,x+1..y]
    desc x y = [y,y-1..x]
    -- Four prefix-matrices:
    tlMax = runSTUArray $ prefixMat max 0 asc asc (subtract 1) (subtract 1) =<< matrix
    blMin = runSTUArray $ prefixMat min infinite desc asc (+1) (subtract 1) =<< matrix
    trMin = runSTUArray $ prefixMat min infinite asc desc (subtract 1) (+1) =<< matrix
    brMax = runSTUArray $ prefixMat max 0 desc desc (+1) (+1) =<< matrix
    good _ (i,j)
      | tlMax!(i,j) < blMin!(i+1,j) && brMax!(i+1,j+1) < trMin!(i,j+1) = Left (i,j)
      | otherwise = Right ()
    {-# INLINABLE good #-}
    nearAns = foldM good () [(i,j) | i<-[1..n-1], j<-[1..m-1]]
    ans = either (\(i,j) -> "YES\n" ++ color n (take i indices) ++ " " ++ show j) (const "NO") nearAns
  putStrLn ans

type I = Int32
type S s = (STUArray s (Int, Int) I)
type R = Int -> Int -> [Int]
type F = Int -> Int

mat :: Int -> Int -> [[I]] -> ST s (S s)
mat n m rows = newListArray ((1,1),(n,m)) $ concat rows

prefixMat :: (I->I->I) -> I -> R -> R -> F -> F -> S s -> ST s (S s)
prefixMat opt worst ordi ordj previ prevj mat = do
  ((ilo,jlo),(ihi,jhi)) <- getBounds mat
  pre <- newArray ((ilo-1,jlo-1),(ihi+1,jhi+1)) worst
  forM_ (ordi ilo ihi) $ \i -> do
    forM_ (ordj jlo jhi) $ \j -> do
      matij <- readArray mat (i,j)
      prei <- readArray pre (previ i,j)
      prej <- readArray pre (i, prevj j)
      writeArray pre (i,j) (opt (opt prei prej) matij)
  return pre

color :: Int -> [Int] -> String
color n inds = let
  temp = DS.fromList inds
  colors = [if DS.member i temp then 'B' else 'R' | i<-[1..n]]
  in colors

readv :: Integral t => IO [t]
readv = map readInt . T.words <$> TI.getLine where
  readInt = fromIntegral . either (const 0) fst . TR.signed TR.decimal
  {-# INLINABLE readv #-}

main :: IO ()
main = do
  ~[n] <- readv
  replicateM_ n solve
Quick description of the code above:

- Read n rows, each having m integers.
- Sort the rows by their first element.
- Compute four 'prefix matrices', one from each corner. For the top-left and bottom-right corners, it's the prefix-maximum; for the other two corners, it's the prefix-minimum that we need to compute.
- Find a cell [i,j] at which these prefix matrices satisfy the following condition: top_left[i,j] < bottom_left[i,j] and top_right[i,j] > bottom_right[i,j].
- For rows 1 through i, mark their original indices (i.e. position in the unsorted input matrix) as Blue. Mark the rest as Red.
Sample input and Commands
Sample input: inp3.txt.
Command:
> stack ghc -- -main-is CF1606D.main -with-rtsopts="-s -h -p -P" -rtsopts -prof -fprof-auto CF1606D
> gtime -v ./CF1606D < inp3.txt > outp
...
...
MUT time 2.990s ( 3.744s elapsed) # RTS -s output
GC time 4.525s ( 6.231s elapsed) # RTS -s output
...
...
Maximum resident set size (kbytes): 408532 # >256 MB (gtime output)
> stack exec -- hp2ps -t0.1 -e8in -c CF1606D.hp && open CF1606D.ps
Question about GC: as shown above in the +RTS -s output, GC seems to take longer than the actual logic execution. Is this normal? Is there a way to visualize the GC activity over time? I tried making the matrices strict but that didn't have any impact.

Probably this is not a functional-friendly problem at all (although I'll be happy to be disproved on this). For example, Java uses GC too, but there are lots of successful Java submissions. Still, I want to see how far I can push. Thanks!
ANSWER
Answered 2021-Dec-05 at 11:40

Contrary to common belief, Haskell is quite friendly with respect to problems like that. The real issue is that the array library that comes with GHC is total garbage. Another big problem is that everyone is taught in Haskell to use lists where arrays should be used instead, which is usually one of the major sources of slow code and memory-bloated programs. So, it is not surprising that GC takes a long time: there is way too much stuff being allocated. Here is a run on the supplied input for the solution provided below:
1,483,547,096 bytes allocated in the heap
566,448 bytes copied during GC
18,703,640 bytes maximum residency (3 sample(s))
1,223,400 bytes maximum slop
32 MiB total memory in use (0 MB lost due to fragmentation)
Tot time (elapsed) Avg pause Max pause
Gen 0 1399 colls, 0 par 0.009s 0.009s 0.0000s 0.0011s
Gen 1 3 colls, 0 par 0.002s 0.002s 0.0006s 0.0016s
TASKS: 4 (1 bound, 3 peak workers (3 total), using -N1)
SPARKS: 0 (0 converted, 0 overflowed, 0 dud, 0 GC'd, 0 fizzled)
INIT time 0.001s ( 0.001s elapsed)
MUT time 0.484s ( 0.517s elapsed)
GC time 0.011s ( 0.011s elapsed)
EXIT time 0.001s ( 0.002s elapsed)
Total time 0.496s ( 0.530s elapsed)
The solution provided below uses the array library massiv, which makes it impossible to submit to Codeforces. However, hopefully the goal is to get better at Haskell, rather than to get points on some website.

The red-blue matrix problem can be separated into two stages: read and solve.

Read

In the main function we only read the total number of arrays and the dimensions of each array. Also we print the outcome. Nothing exciting here. (Note that the linked file inp3.txt has a larger array than the limits defined in the problem: n*m <= 10^6.)
import Control.Monad.ST
import Control.Monad
import qualified Data.ByteString as BS
import Data.Massiv.Array as A hiding (B)
import Data.Massiv.Array.Mutable.Algorithms (quicksortByM_)
import Control.Scheduler (trivialScheduler_)

main :: IO ()
main = do
  t <- Prelude.read <$> getLine
  when (t < 1 || t > 1000) $ error $ "Invalid t: " ++ show t
  replicateM_ t $ do
    dimsStr <- getLine
    case Prelude.map Prelude.read (words dimsStr) of
      -- Test file fails this check: && n * m <= 10 ^ (6 :: Int) -> do
      [n, m] | n >= 2 && m > 0 && m <= 5 * 10 ^ (5 :: Int) -> do
        mat <- readMatrix n m
        case solve mat of
          Nothing -> putStrLn "NO"
          Just (ix, cs) -> do
            putStrLn "YES"
            putStr $ foldMap show cs
            putStr " "
            print ix
      _ -> putStrLn $ "Unexpected dimensions: " ++ show dimsStr
_ -> putStrLn $ "Unexpected dimensions: " ++ show dimsStr
Loading the input into an array is the major source of problems in the original question:

- there is no need to rely on text; ASCII characters are the only valid input expected by the problem.
- input is read into a list of lists. That list of lists is the real source of the memory overhead.
- sorting lists is ridiculously slow and memory hungry.

Normally in such a situation it would be much better to read input in a streaming fashion using something like conduit. In particular, reading input as a stream of bytes and parsing those bytes as numbers would be the optimal solution. That being said, there are hard requirements on the width of each array in the description of the problem, so we can get away with reading input line-by-line as a ByteString and then parsing the numbers (assumed unsigned for simplicity) in each line while writing them into the array at the same time. This ensures that at this stage we will only have allocated the resulting array and a single line as a sequence of bytes. This could be done more cleanly with a parsing library like attoparsec, but the problem is simple enough to just do it ad hoc.
type Val = Word

readMatrix :: Int -> Int -> IO (Matrix P Val)
readMatrix n m = createArrayS_ (Sz2 n m) readMMatrix

readMMatrix :: MMatrix RealWorld P Val -> IO ()
readMMatrix mat =
  loopM_ 0 (< n) (+ 1) $ \i -> do
    line <- BS.getLine
    --- ^ reads at most 10Mb because it is known that input will be at most
    -- 5*10^5 Words: 19 digits max per Word and one for space: 5*10^5 * 20bytes
    loopM 0 (< m) (+ 1) line $ \j bs ->
      let (word, bs') = parseWord bs
       in bs' <$ write_ mat (i :. j) word
  where
    Sz2 n m = sizeOfMArray mat
    isSpace = (== 32)
    isDigit w8 = w8 >= 48 && w8 <= 57
    parseWord bs =
      case BS.uncons bs of
        Just (w8, bs')
          | isDigit w8 -> parseWordLoop (fromIntegral (w8 - 48)) bs'
          | otherwise -> error $ "Unexpected byte: " ++ show w8
        Nothing -> error "Unexpected end of input"
    parseWordLoop !acc bs =
      case BS.uncons bs of
        Nothing -> (acc, bs)
        Just (w8, bs')
          | isSpace w8 -> (acc, bs')
          | isDigit w8 -> parseWordLoop (acc * 10 + fromIntegral (w8 - 48)) bs'
          | otherwise -> error $ "Unexpected byte: " ++ show w8
This is the step where we implement the actual solution. Instead of trying to fix the solution provided in this SO question, I went ahead and translated the C++ solution that was linked in the question instead. The reason I went that route is twofold:
- The C++ solution is highly imperative, and I wanted to demonstrate that imperative array manipulations are not that foreign to Haskell, so I tried to create a translation that is as close as possible.
- I knew that solution works.
Note that it should be possible to rewrite the solution below with the array package, because in the end all that is needed are the read, write and allocate operations.
computeSortBy ::
(Load r Ix1 e, Manifest r' e)
=> (e -> e -> Ordering)
-> Vector r e
-> Vector r' e
computeSortBy f vec =
withLoadMArrayST_ vec $ quicksortByM_ (\x y -> pure $ f x y) trivialScheduler_
solve :: Matrix P Val -> Maybe (Int, [Color])
solve a = runST $ do
let sz@(Sz2 n m) = size a
ord :: Vector P Int
ord = computeSortBy
(\x y -> compare (a ! (y :. 0)) (a ! (x :. 0))) (0 ..: n)
mxl <- newMArray @P sz minBound
loopM_ (n - 1) (>= 0) (subtract 1) $ \ i ->
loopM_ 0 (< m) (+ 1) $ \j -> do
writeM mxl (i :. j) (a ! ((ord ! i) :. j))
when (i < n - 1) $
writeM mxl (i :. j)
=<< max <$> readM mxl (i :. j) <*> readM mxl (i + 1 :. j)
when (j > 0) $
writeM mxl (i :. j)
=<< max <$> readM mxl (i :. j) <*> readM mxl (i :. j - 1)
mnr <- newMArray @P sz maxBound
loopM_ (n - 1) (>= 0) (subtract 1) $ \ i ->
loopM_ (m - 1) (>= 0) (subtract 1) $ \ j -> do
writeM mnr (i :. j) (a ! ((ord ! i) :. j))
when (i < n - 1) $
writeM mnr (i :. j)
=<< min <$> readM mnr (i :. j) <*> readM mnr (i + 1 :. j)
when (j < m - 1) $
writeM mnr (i :. j)
=<< min <$> readM mnr (i :. j) <*> readM mnr (i :. j + 1)
mnl <- newMArray @P (Sz m) maxBound
mxr <- newMArray @P (Sz m) minBound
let goI i
| i < n - 1 = do
loopM_ 0 (< m) (+ 1) $ \j -> do
val <- min (a ! ((ord ! i) :. j)) <$> readM mnl j
writeM mnl j val
when (j > 0) $
writeM mnl j . min val =<< readM mnl (j - 1)
loopM_ (m - 1) (>= 0) (subtract 1) $ \j -> do
val <- max (a ! ((ord ! i) :. j)) <$> readM mxr j
writeM mxr j val
when (j < m - 1) $
writeM mxr j . max val =<< readM mxr (j + 1)
let goJ j
| j < m - 1 = do
mnlVal <- readM mnl j
mxlVal <- readM mxl (i + 1 :. j)
mxrVal <- readM mxr (j + 1)
mnrVal <- readM mnr ((i + 1) :. (j + 1))
if mnlVal > mxlVal && mxrVal < mnrVal
then pure $ Just (i, j)
else goJ (j + 1)
| otherwise = pure Nothing
goJ 0 >>= \case
Nothing -> goI (i + 1)
Just pair -> pure $ Just pair
| otherwise = pure Nothing
mAns <- goI 0
Control.Monad.forM mAns $ \ (ansFirst, ansSecond) -> do
resVec <- createArrayS_ @BL (Sz n) $ \res ->
iforM_ ord $ \i ordIx -> do
writeM res ordIx $! if i <= ansFirst then R else B
pure (ansSecond + 1, A.toList resVec)
QUESTION
I'm looking for a way to have all keys / values pair of a nested object.
(For the autocomplete of MongoDB dot notation key / value type)
interface IPerson {
name: string;
age: number;
contact: {
address: string;
visitDate: Date;
}
}
Here is what I want to achieve; it should become:
type TPerson = {
name: string;
age: number;
contact: { address: string; visitDate: Date; }
"contact.address": string;
"contact.visitDate": Date;
}
In this answer, I can get the keys with Leaves, so it becomes 'name' | 'age' | 'contact.address' | 'contact.visitDate'.
And in another answer from @jcalz, I can get the deep, related value type with DeepIndex.
Is it possible to combine them into a type like TPerson?
When I started this question, I was thinking it could be as easy as something like [K in keyof T]: T[K] with some clever transformation, but I was wrong. Here is what I need:
1. Array
So the interface
interface IPerson {
contact: {
address: string;
visitDate: Date;
}[]
}
becomes
type TPerson = {
[x: `contact.${number}.address`]: string;
[x: `contact.${number}.visitDate`]: Date;
contact: {
address: string;
visitDate: Date;
}[];
}
No need to check for a valid number; the nature of Array / Index Signatures should allow any number of elements.
2. Tuple
The interface
interface IPerson {
contact: [string, Date]
}
becomes
type TPerson = {
[x: `contact.0`]: string;
[x: `contact.1`]: Date;
contact: [string, Date];
}
Tuple should be the one which cares about valid index numbers.
3. Readonly
readonly attributes should be removed from the final structure.
interface IPerson {
readonly _id: string;
age: number;
readonly _created_date: Date;
}
becomes
type TPerson = {
age: number;
}
The use case is for MongoDB: the _id and _created_date cannot be modified after the data has been created. _id: never does not work in this case, since it would block the creation of TPerson.
4. Optional
interface IPerson {
contact: {
address: string;
visitDate?: Date;
}[];
}
becomes
type TPerson = {
[x: `contact.${number}.address`]: string;
[x: `contact.${number}.visitDate`]?: Date;
contact: {
address: string;
visitDate?: Date;
}[];
}
It's sufficient just to bring the optional flags over to the transformed structure.
5. Intersection
interface IPerson {
contact: { address: string; } & { visitDate: Date; }
}
becomes
type TPerson = {
[x: `contact.address`]: string;
[x: `contact.visitDate`]: Date;
contact: { address: string; } & { visitDate: Date; }
}
The interface
interface IPerson {
birth: Date;
}
becomes
type TPerson = {
birth: Date;
}
not
type TPerson = {
birth: Date;
"birth.toDateString": () => string;
"birth.toTimeString": () => string;
"birth.toLocaleDateString": {
...
}
We can give a list of Types to be the end node.
Here is what I don't need:
- Union. It could get too complex with it.
- Class-related keywords. No need to handle keywords such as private / abstract.
- All the rest I didn't write here.
ANSWER
Answered 2021-Dec-02 at 09:30
In order to achieve this goal we need to create a permutation of all allowed paths. For example:
type Structure = {
user: {
name: string,
surname: string
}
}
type BlackMagic<T> = T
// user.name | user.surname
type Result = BlackMagic<Structure>
The problem becomes more interesting with arrays and empty tuples.
A tuple, an array with explicit length, should be managed in this way:
type Structure = {
user: {
arr: [1, 2],
}
}
type BlackMagic<T> = T
// "user.arr" | "user.arr.0" | "user.arr.1"
type Result = BlackMagic<Structure>
The logic is straightforward. But how can we handle number[]? There is no guarantee that index 1 exists.
I have decided to use user.arr.${number}.
type Structure = {
user: {
arr: number[],
}
}
type BlackMagic<T> = T
// "user.arr" | `user.arr.${number}`
type Result = BlackMagic<Structure>
We still have one problem: the empty tuple, an array with zero elements - []. Do we need to allow indexing at all? I don't know. I decided to use -1.
type Structure = {
user: {
arr: [],
}
}
type BlackMagic<T> = T
// "user.arr" | "user.arr.-1"
type Result = BlackMagic<Structure>
I think the most important thing here is some convention. We can also use a stringified "never". I think it is up to the OP how to handle it.
Since we know how we need to handle different cases we can start our implementation. Before we continue, we need to define several helpers.
type Values<T> = T[keyof T]
{
// 1 | "John"
type _ = Values<{ age: 1, name: 'John' }>
}
type IsNever<T> = [T] extends [never] ? true : false;
{
type _ = IsNever<never> // true
type __ = IsNever<string> // false
}
type IsTuple<T> =
(T extends Array<any> ?
(T['length'] extends number
? (number extends T['length']
? false
: true)
: true)
: false)
{
type _ = IsTuple<[1, 2]> // true
type __ = IsTuple<number[]> // false
type ___ = IsTuple<{ length: 2 }> // false
}
type IsEmptyTuple<T extends Array<any>> = T['length'] extends 0 ? true : false
{
type _ = IsEmptyTuple<[]> // true
type __ = IsEmptyTuple<[1]> // false
type ___ = IsEmptyTuple<number[]> // false
}
I think the naming and tests are self-explanatory. At least I want to believe :D
Now that we have our full set of utils, we can define the main util:
/**
* If Cache is empty return Prop without dot,
* to avoid ".user"
*/
type HandleDot<
Cache extends string,
Prop extends string | number
> =
Cache extends ''
? `${Prop}`
: `${Cache}.${Prop}`
/**
* Simple iteration through object properties
*/
type HandleObject<Obj, Cache extends string> = {
[Prop in keyof Obj]:
// concat previous Cache and Prop
| HandleDot<Cache, Prop & string>
// with next Cache and Prop
| Path<Obj[Prop], HandleDot<Cache, Prop & string>>
}[keyof Obj]
type Path<Obj, Cache extends string = ''> =
// if Obj is primitive
(Obj extends PropertyKey
// return Cache
? Cache
// if Obj is Array (can be array, tuple, empty tuple)
: (Obj extends Array<any>
// and is tuple
? (IsTuple<Obj> extends true
// and tuple is empty
? (IsEmptyTuple<Obj> extends true
// call recursively Path with `-1` as an allowed index
? Path<PropertyKey, HandleDot<Cache, -1>>
// if tuple is not empty we can handle it as regular object
: HandleObject<Obj, Cache>)
// if Obj is regular array call Path with union of all elements
: Path<Obj[number], HandleDot<Cache, number>>)
// if Obj is neither Array nor Tuple nor Primitive - treat is as object
: HandleObject<Obj, Cache>)
)
// "user" | "user.arr" | `user.arr.${number}`
type Test = Extract<Path<Structure>, string>
There is a small issue. We should not return the highest-level props, like user. We need paths with at least one dot.
There are two ways:
- filter out all props without dots
- provide an extra generic parameter for indexing the level.
Both options are easy to implement.
Obtain all props with a dot (.):
type WithDot<T> = T extends `${string}.${string}` ? T : never
While the above util is readable and maintainable, the second one is a bit harder. We need to provide an extra generic parameter in both Path and HandleObject. See this example taken from another question / article:
type KeysUnion<T, Cache extends string = '', Level extends 1[] = []> =
T extends PropertyKey ? Cache : {
[P in keyof T]:
P extends string
? Cache extends ''
? KeysUnion<T[P], `${P}`, [...Level, 1]>
: Level['length'] extends 1 // if it is a higher level - proceed
? KeysUnion<T[P], `${Cache}.${P}`, [...Level, 1]>
: Level['length'] extends 2 // stop on second level
? Cache | KeysUnion<T[P], `${Cache}.${P}`, [...Level, 1]>
: never
: never
}[keyof T]
Honestly, I don't think it will be easy for anyone to read this.
We need to implement one more thing. We need to obtain a value by computed path.
type Acc = Record<string, any>
type ReducerCallback<Accumulator extends Acc, El extends string> =
El extends keyof Accumulator ? Accumulator[El] : Accumulator
type Reducer<
Keys extends string,
Accumulator extends Acc = {}
> =
// Key destructure
Keys extends `${infer Prop}.${infer Rest}`
// call Reducer with callback, just like in JS
? Reducer<Rest, ReducerCallback<Accumulator, Prop>>
// this is the last part of path because no dot
: Keys extends `${infer Last}`
// call reducer with last part
? ReducerCallback<Accumulator, Last>
: never
{
type _ = Reducer<'user.arr', Structure> // []
type __ = Reducer<'user', Structure> // { arr: [] }
}
You can find more information about using Reduce in my blog.
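The type-level Reducer mirrors what a runtime lookup would do: split the path on dots and reduce over the segments, descending one property per step. A small illustrative counterpart (getByPath is just an example name, not from the answer):

```typescript
// Runtime counterpart of the type-level Reducer: descend one
// property per path segment, exactly like ReducerCallback does
// with types; unknown segments yield undefined.
function getByPath(obj: unknown, path: string): unknown {
  return path.split(".").reduce<unknown>(
    (acc, segment) =>
      acc !== null && typeof acc === "object"
        ? (acc as Record<string, unknown>)[segment]
        : undefined,
    obj
  );
}
```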
Whole code:
type Structure = {
user: {
tuple: [42],
emptyTuple: [],
array: { age: number }[]
}
}
type Values<T> = T[keyof T]
{
// 1 | "John"
type _ = Values<{ age: 1, name: 'John' }>
}
type IsNever<T> = [T] extends [never] ? true : false;
{
type _ = IsNever<never> // true
type __ = IsNever<string> // false
}
type IsTuple<T> =
(T extends Array<any> ?
(T['length'] extends number
? (number extends T['length']
? false
: true)
: true)
: false)
{
type _ = IsTuple<[1, 2]> // true
type __ = IsTuple<number[]> // false
type ___ = IsTuple<{ length: 2 }> // false
}
type IsEmptyTuple<T extends Array<any>> = T['length'] extends 0 ? true : false
{
type _ = IsEmptyTuple<[]> // true
type __ = IsEmptyTuple<[1]> // false
type ___ = IsEmptyTuple<number[]> // false
}
/**
* If Cache is empty return Prop without dot,
* to avoid ".user"
*/
type HandleDot<
Cache extends string,
Prop extends string | number
> =
Cache extends ''
? `${Prop}`
: `${Cache}.${Prop}`
/**
* Simple iteration through object properties
*/
type HandleObject<Obj, Cache extends string> = {
[Prop in keyof Obj]:
// concat previous Cache and Prop
| HandleDot<Cache, Prop & string>
// with next Cache and Prop
| Path<Obj[Prop], HandleDot<Cache, Prop & string>>
}[keyof Obj]
type Path<Obj, Cache extends string = ''> =
(Obj extends PropertyKey
// return Cache
? Cache
// if Obj is Array (can be array, tuple, empty tuple)
: (Obj extends Array<any>
// and is tuple
? (IsTuple<Obj> extends true
// and tuple is empty
? (IsEmptyTuple<Obj> extends true
// call recursively Path with `-1` as an allowed index
? Path<PropertyKey, HandleDot<Cache, -1>>
// if tuple is not empty we can handle it as regular object
: HandleObject<Obj, Cache>)
// if Obj is regular array call Path with union of all elements
: Path<Obj[number], HandleDot<Cache, number>>)
// if Obj is neither Array nor Tuple nor Primitive - treat is as object
: HandleObject<Obj, Cache>)
)
type WithDot<T> = T extends `${string}.${string}` ? T : never
// "user" | "user.arr" | `user.arr.${number}`
type Test = WithDot<Extract<Path<Structure>, string>>
type Acc = Record<string, any>
type ReducerCallback<Accumulator extends Acc, El extends string> =
El extends keyof Accumulator ? Accumulator[El] : El extends '-1' ? never : Accumulator
type Reducer<
Keys extends string,
Accumulator extends Acc = {}
> =
// Key destructure
Keys extends `${infer Prop}.${infer Rest}`
// call Reducer with callback, just like in JS
? Reducer<Rest, ReducerCallback<Accumulator, Prop>>
// this is the last part of path because no dot
: Keys extends `${infer Last}`
// call reducer with last part
? ReducerCallback<Accumulator, Last>
: never
{
type _ = Reducer<'user.arr', Structure> // []
type __ = Reducer<'user', Structure> // { arr: [] }
}
type BlackMagic<T> = T & {
[Prop in WithDot<Extract<Path<T>, string>>]: Reducer<Prop, T>
}
type Result = BlackMagic<Structure>
This implementation is worth considering.
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install swagger_meqa
Use your OpenAPI spec (e.g., petstore.yml) to generate the test plan files.
Pick a test plan file to run.
mqgo generate -d /testdata/ -s /testdata/petstore.yml
mqgo run -d /testdata/ -s /testdata/petstore_meqa.yml -p /testdata/path.yml
Search for meqa in petstore_meqa.yml to see all the tags.
The tags will be more accurate if the OpenAPI spec is more structured (e.g. using #definitions instead of inline Objects) and has more descriptions.
See meqa Format for the meaning of tags and adjust them if a tag is wrong.
If you add or override the meqa tags, you can feed the tagged yaml file into the "mqgo generate" function again to create new test suites.
simple.yml just exercises a few simple APIs to expose obvious issues, such as lack of api keys.
path.yml exercises CRUD patterns grouped by the REST path.
object.yml tries to create an object, then exercises the endpoints that need the object as an input.
The above are just a starting point as a proof of concept. We will add more test patterns if there is enough interest.
The test yaml files can be edited to add in your own test suites. We allow overriding global, test suite and test parameters, as well as chaining output to input parameters. See meqa format for more details.