ridl | RIDL test suite and exploits

 by vusec | Language: C | Version: Current | License: MPL-2.0

kandi X-RAY | ridl Summary

ridl is a C library containing the RIDL test suite and exploits. It has no reported bugs or vulnerabilities, carries a weak-copyleft license (MPL-2.0), and has low support activity. You can download it from GitHub.

This repository contains the RIDL test suite and exploits.

            Support

              ridl has a low-activity ecosystem.
              It has 312 stars, 44 forks, and 38 watchers.
              It has had no major release in the last 6 months.
              There are 16 open issues and 9 closed issues; on average, issues are closed in 6 days. There are no pull requests.
              Developer-community sentiment is neutral.
              The latest version of ridl is current.

            Quality

              ridl has no bugs reported.

            Security

              ridl has no reported vulnerabilities, and neither do its dependent libraries.

            License

              ridl is licensed under the MPL-2.0 License. This license is Weak Copyleft.
              Weak Copyleft licenses have some restrictions, but you can use them in commercial projects.

            Reuse

              ridl releases are not available. You will need to build from source code and install.


            ridl Key Features

            No Key Features are available at this moment for ridl.

            ridl Examples and Code Snippets

            No Code Snippets are available at this moment for ridl.

            Community Discussions

            QUESTION

            What is an assisted/assisting load?
            Asked 2020-Apr-28 at 06:41

            The RIDL exploit requires that the attacker trigger a page fault to be able to read stale data from the Line Fill Buffer. But according to About the RIDL vulnerabilities and the "replaying" of loads, an assisted load can also be used.

            That question mentions assisted/assisting loads nine times, but I still couldn't wrap my head around what such a load does or how it is triggered. It's something related to TLBs and something that can "cause a page walk that requires a microcode assist".

            Can someone explain what an assisted/assisting load is, preferably with a worked out example?

            ...

            ANSWER

            Answered 2020-Apr-28 at 06:41

            You left out the rest of the sentence in that quote, which explains why a page walk might need a microcode assist: "... causes a page walk that requires a microcode assist (to set the accessed bit in the page table entry)."

            The x86 ISA says that reading or writing a page will set the "accessed" bit in the page table entry (PTE) for that mapping, if the bit wasn't already set. OSes can use this to see which pages are actually getting accessed regularly (by clearing the accessed bit and letting HW set it again) so they can decide which pages to page out if they need to free up some physical pages. Same story for the "dirty" bit, which lets an OS know whether the page needs to be synced back to a file or other backing store, if any (e.g. this is how an OS can implement writeback for an mmap(MAP_SHARED, PROT_WRITE) mapping).

            Page walks to fill TLB entries are pure dedicated hardware, but updating those PTE bits with stores is rare enough that it can be left to microcode; the CPU basically traps to internal microcode and runs it before resuming.

            A similar mechanism is used in some CPUs to handle subnormal aka denormal floating-point results that the hard-wired FPU doesn't handle. This lets the common case (normalized floats) be lower latency.

            Related:

            Perf counter on Intel (on Skylake at least): perf stat -e other_assists.any

            [Number of times a microcode assist is invoked by HW other than FP-assist. Examples include AD (page Access Dirty) and AVX* related assists]

            Triggering assisted loads from user-space: I'm not sure which approach is good.

            msync(MS_SYNC) on a file-backed mapping should clear the Dirty bit. IDK if it would clear the Accessed bit. Presumably a fresh file-backed mmap with MAP_POPULATE would have its Accessed bit clear, but be wired into the page table so it wouldn't take a #PF page fault exception. Maybe also works with MAP_ANONYMOUS.

            If you had multiple pages with their Accessed bits clear, you could loop over them to be able to do multiple assisted loads without making an expensive system call in between.

            On Linux kernel 4.12 and later I suspect madvise(MADV_FREE) on private anonymous pages clears the Dirty bit, based on the way the man page describes it. It might also clear the Accessed bit, so a load might also need an assist, IDK.

            MADV_FREE (since Linux 4.5)
            The application no longer requires the pages in the range specified by addr and len. The kernel can thus free these pages, but the freeing could be delayed until memory pressure occurs. For each of the pages that has been marked to be freed but has not yet been freed, the free operation will be canceled if the caller writes into the page. After a successful MADV_FREE operation, any stale data (i.e., dirty, unwritten pages) will be lost when the kernel frees the pages. However, subsequent writes to pages in the range will succeed and then kernel cannot free those dirtied pages, so that the caller can always see just written data. If there is no subsequent write, the kernel can free the pages at any time. Once pages in the range have been freed, the caller will see zero-fill-on-demand pages upon subsequent page references.

            The MADV_FREE operation can be applied only to private anonymous pages (see mmap(2)). In Linux before version 4.12, when freeing pages on a swapless system, the pages in the given range are freed instantly, regardless of memory pressure.

            Or maybe mprotect, or maybe mmap(MAP_FIXED|MAP_POPULATE) a new anonymous page to replace the current page. With MAP_POPULATE it should already be wired into the HW page tables (not needing a soft page-fault on first access). The dirty bit should be clear, and maybe also the Accessed bit.
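
            Putting a few of those ideas together, here is a minimal user-space sketch (not from this repository; whether MADV_FREE really clears the Accessed bit is exactly the open question above) that prepares a batch of pages and then loops over them, so each load can potentially take an A/D microcode assist. Running it under perf stat -e other_assists.any would show whether assists actually fire.

            #define _GNU_SOURCE          /* for MAP_POPULATE / MADV_FREE on Linux */
            #include <sys/mman.h>
            #include <stdio.h>
            #include <stdint.h>

            #define NPAGES 64
            #define PAGE   4096

            int main(void)
            {
                /* MAP_POPULATE wires the pages into the page tables up front,
                 * so the first access shouldn't take a soft page fault. */
                uint8_t *buf = mmap(NULL, NPAGES * PAGE, PROT_READ | PROT_WRITE,
                                    MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE, -1, 0);
                if (buf == MAP_FAILED) { perror("mmap"); return 1; }

                for (int i = 0; i < NPAGES; i++)       /* dirty every page once */
                    buf[(size_t)i * PAGE] = 1;

                /* MADV_FREE may clear the Dirty (and perhaps Accessed) bit without
                 * unmapping, so the next load from each page might need an assist. */
                if (madvise(buf, NPAGES * PAGE, MADV_FREE) != 0)
                    perror("madvise");

                /* Loop over the pages without a system call in between: if A=0,
                 * each load should take an Access/Dirty microcode assist. */
                volatile uint8_t sink;
                for (int i = 0; i < NPAGES; i++)
                    sink = buf[(size_t)i * PAGE];
                (void)sink;
                return 0;
            }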

            A vpmaskmovd store with mask=0 (no actual store) will trigger an assist on a write-protected page, e.g. a lazily-allocated mmap(PROT_READ|PROT_WRITE) page that's only been read, not written. So it's still CoW mapped to a shared physical page of zeros.

            It leaves the page clean, so this can happen every time in a loop over an array if every store has mask=0 to not replace any elements.

            This is a little different from the Accessed / Dirty page-table assists you want. This assist is I think for fault suppression, because it needs to not take a #PF page fault. (The page is actually write protected, not just clean.)

            IDK if that's useful for MDS / RIDL purposes.

            I haven't tested with masked loads from a freshly-allocated mmap(MAP_POPULATE) buffer to see if they take an assist but leave the Accessed bit unset.
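
            For completeness, a sketch of that masked-store trick (AVX2 intrinsics; as noted above, it is unknown whether the resulting assist is useful for MDS/RIDL):

            /* An all-zero-mask masked store (vpmaskmovd) to a page that is still
             * CoW-mapped to the shared zero page: it stores nothing, so the page
             * stays clean and write-protected, and the store should need a
             * fault-suppression assist every iteration.  Compile with -mavx2. */
            #define _GNU_SOURCE
            #include <immintrin.h>
            #include <sys/mman.h>

            int main(void)
            {
                /* Lazily-allocated anonymous page: reading it first maps it CoW to
                 * the shared physical zero page, i.e. write-protected in the HW
                 * page tables. */
                int *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                if (page == MAP_FAILED) return 1;
                volatile int touch = page[0];            /* read-only touch keeps it CoW */
                (void)touch;

                __m256i mask = _mm256_setzero_si256();   /* all mask bits 0: nothing stored */
                __m256i data = _mm256_set1_epi32(42);
                for (int i = 0; i < 100000; i++)
                    _mm256_maskstore_epi32(page, mask, data);  /* assist, page stays clean */
                return 0;
            }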

            Source https://stackoverflow.com/questions/61452899

            QUESTION

            How do the store buffer and Line Fill Buffer interact with each other?
            Asked 2020-Apr-12 at 18:09

            I was reading the MDS attack paper RIDL: Rogue In-Flight Data Load. They discuss how the Line Fill Buffer can cause leakage of data. There is the About the RIDL vulnerabilities and the "replaying" of loads question that discusses the micro-architectural details of the exploit.

            One thing that isn't clear to me after reading that question is why we need a Line Fill Buffer if we already have a store buffer.

            John McCalpin discusses how the store buffer and Line Fill Buffer are connected in How does WC-buffer relate to LFB? on the Intel forums, but that doesn't really make things clearer to me.

            For stores to WB space, the store data stays in the store buffer until after the retirement of the stores. Once retired, data can be written to the L1 Data Cache (if the line is present and has write permission), otherwise an LFB is allocated for the store miss. The LFB will eventually receive the "current" copy of the cache line so that it can be installed in the L1 Data Cache and the store data can be written to the cache. Details of merging, buffering, ordering, and "short cuts" are unclear.... One interpretation that is reasonably consistent with the above would be that the LFBs serve as the cacheline-sized buffers in which store data is merged before being sent to the L1 Data Cache. At least I think that makes sense, but I am probably forgetting something....

            I've just recently started reading up on out-of-order execution so please excuse my ignorance. Here is my idea of how a store would pass through the store buffer and Line Fill Buffer.

            1. A store instruction gets scheduled in the front-end.
            2. It executes in the store unit.
            3. The store request (an address and the data) is put in the store buffer.
            4. An invalidate read request is sent from the store buffer to the cache system
            5. If it misses the L1d cache, then the request is put in the Line Fill Buffer
            6. The Line Fill Buffer forwards the invalidate read request to L2
            7. Some cache receives the invalidate read and sends its cache line
            8. The store buffer applies its value to the incoming cache line
            9. Uh? The Line Fill Buffer marks the entry as invalid

            Questions
            1. Why do we need the Line Fill Buffer if the store buffer already exists to track outstanding store requests?
            2. Is the ordering of events correct in my description?
            ...

            ANSWER

            Answered 2020-Apr-12 at 18:09

            Why do we need the Line Fill Buffer if the store buffer already exists to track outstanding store requests?

            The store buffer is used to track stores, in order, both before they retire and after they retire but before they commit to the L1 cache. The store buffer conceptually is a totally local thing which doesn't really care about cache misses. The store buffer deals in "units" of individual stores of various sizes. Chips like Intel Skylake have store buffers of 50+ entries.

            The line fill buffers primarily deal with both loads and stores that miss in the L1 cache. Essentially, they are the path from the L1 cache to the rest of the memory subsystem and deal in cache-line-sized units. We don't expect the LFB to get involved if the load or store hits in the L1 cache.[1] Intel chips like Skylake have many fewer LFB entries, probably 10 to 12.

            Is the ordering of events correct in my description?

            Pretty close. Here's how I'd change your list:

            1. A store instruction gets decoded and split into store-data and store-address uops, which are renamed, scheduled and have a store buffer entry allocated for them.
            2. The store uops execute in any order or simultaneously (the two sub-items can execute in either order depending mostly on which has its dependencies satisfied first).
              1. The store-data uop writes the store data into the store buffer.
              2. The store-address uop does the virtual-to-physical (V-P) translation and writes the address(es) into the store buffer.
            3. At some point when all older instructions have retired, the store instruction retires. This means that the instruction is no longer speculative and the results can be made visible. At this point, the store remains in the store buffer and is called a senior store.
            4. The store now waits until it is at the head of the store buffer (it is the oldest not committed store), at which point it will commit (become globally observable) into the L1, if the associated cache line is present in the L1 in MESIF Modified or Exclusive state. (i.e. this core owns the line)
            5. If the line is not present in the required state (either missing entirely, i.e., a cache miss, or present but in a non-exclusive state), permission to modify the line, and (sometimes) the line data, must be obtained from the memory subsystem: this allocates an LFB for the entire line, if one is not already allocated. This is a so-called request for ownership (RFO), which means that the memory hierarchy should return the line in an exclusive state suitable for modification, as opposed to a shared state suitable only for reading (this invalidates copies of the line present in any other private caches).

              An RFO to convert Shared to Exclusive still has to wait for a response to make sure all other caches have invalidated their copies. The response to such an invalidate doesn't need to include a copy of the data because this cache already has one. It can still be called an RFO; the important part is gaining ownership before modifying a line.

            6. In the miss scenario the LFB eventually comes back with the full contents of the line, which is committed to the L1 and the pending store can now commit.

            This is a rough approximation of the process. Some details may differ on some or all chips, including details which are not well understood.

            As one example, in the ordering above, the lines for store misses are not fetched until the store reaches the head of the store queue. In reality, the store subsystem may implement a type of RFO prefetch, where the store queue is examined for upcoming stores and, if the lines aren't present in L1, a request is started early (the actual visible commit to L1 still has to happen in order on x86, or at least "as if" in order).

            So the request and LFB use may occur as early as when step 3 completes (if RFO prefetch applies only after a store retires), or perhaps even as early as when 2.2 completes, if junior stores are subject to prefetch.

            As another example, step 6 describes the line coming back from the memory hierarchy and being committed to the L1, then the store commits. It is possible that the pending store is actually merged instead with the returning data and then that is written to L1. It is also possible that the store can leave the store buffer even in the miss case and simply wait in the LFB, freeing up some store buffer entries.
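
            A rough way to watch this from software (a sketch, not part of the question; the perf event names are Skylake-era assumptions) is to commit stores over a working set much larger than L1d, so nearly every committing store needs an RFO through an LFB:

            /* Generate lots of committing store misses, each needing an RFO and an
             * LFB entry.  Profile with something like
             *   perf stat -e l2_rqsts.all_rfo,l1d_pend_miss.fb_full ./a.out
             * (event names differ on other microarchitectures). */
            #include <stdlib.h>
            #include <string.h>

            enum { LINE = 64, NLINES = 1 << 20 };   /* 64 MiB: far beyond L1d/L2 */

            int main(void)
            {
                volatile char *buf = malloc((size_t)NLINES * LINE);
                if (!buf) return 1;
                memset((void *)buf, 0, (size_t)NLINES * LINE);  /* fault the pages in first */

                for (int rep = 0; rep < 20; rep++)
                    for (long i = 0; i < NLINES; i++)
                        buf[i * LINE] = (char)rep;   /* one store per line: mostly L1d
                                                        misses, so commit stalls on RFOs
                                                        through LFBs */
                return buf[12345 * LINE];
            }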

            [1] In the case of stores that hit in the L1 cache, there is a suggestion that the LFBs are actually involved: that each store actually enters a combining buffer (which may just be an LFB) prior to being committed to the cache, such that a series of stores targeting the same cache line get combined in the buffer and only need to access the L1 once. This isn't proven, but in any case it is not really part of the main use of LFBs (more obvious from the fact that we can't even really tell if it is happening or not).

            Source https://stackoverflow.com/questions/61129773

            QUESTION

            When use write-through cache policy for pages
            Asked 2020-Apr-09 at 22:08

            I was reading the MDS attack paper RIDL: Rogue In-Flight Data Load. They set pages as write-back, write-through, write-combining or uncacheable and, with different experiments, determine that the Line Fill Buffer is the cause of the micro-architectural leaks.

            On a tangent: I was aware that memory can be uncacheable, but I assumed that cacheable data was always cached in a write-back cache, i.e. I assumed that the L1, L2 and LLC were always write-back caches.

            I read up on the differences between write-back and write-through caches in my Computer Architecture book. It says:

            Write-through caches are simpler to implement and can use a write buffer that works independently of the cache to update memory. Furthermore, read misses are less expensive because they do not trigger a memory write. On the other hand, write-back caches result in fewer transfers, which allows more bandwidth to memory for I/O devices that perform DMA. Further, reducing the number of transfers becomes increasingly important as we move down the hierarchy and the transfer times increase. In general, caches further down the hierarchy are more likely to use write-back than write-through.

            So a write-through cache is simpler to implement. I can see how that can be an advantage. But if the caching policy is settable by the page table attributes then there can't be an implementation advantage - every cache needs to be able to work in either write-back or write-through.

            Questions
            1. Can every cache (L1, L2, LLC) work in either write-back or write-through mode? So if the page attribute is set to write-through, then they all will be write-through?
            2. Write combining is useful for GPU memory; Uncacheable is good when accessing hardware registers. When should a page be set to write-through? What are the advantages to that?
            3. Are there any write-through caches (if it really is a property of the hardware and not just something that is controlled by the pagetable attributes) or is the trend that all caches are created as write-back to reduce traffic?
            ...

            ANSWER

            Answered 2020-Apr-09 at 22:08

            Can every cache (L1, L2, LLC) work in either write-back or write-through mode?

            In most x86 microarchitectures, yes, all the data / unified caches are (capable of) write-back and used in that mode for all normal DRAM. Which cache mapping technique is used in intel core i7 processor? has some details and links. Unless otherwise specified, the default assumption by anyone talking about x86 is that DRAM pages will be WB.

            AMD Bulldozer made the unconventional choice to use write-through L1d with a small 4k write-combining buffer between it and L2 (https://www.realworldtech.com/bulldozer/8/). This has many disadvantages and is, I think, widely regarded (in hindsight) as one of several weaknesses or even design mistakes of the Bulldozer family (which AMD fixed for Zen). Note also that Bulldozer was an experiment in CMT instead of SMT (two weak integer cores sharing an FPU/SIMD unit, each with separate L1d caches sharing an L2 cache); https://www.realworldtech.com/bulldozer/3/ shows the system architecture.

            But of course Bulldozer L2 and L3 caches were still WB, the architects weren't insane. WB caching is essential to reduce bandwidth demands for shared LLC and memory. And even the write-through L1d needed a write-combining buffer to allow L2 cache to be larger and slower, thus serving its purpose of sometimes hitting when L1d misses. See also Why is the size of L1 cache smaller than that of the L2 cache in most of the processors?

            Write-through caching can simplify a design (especially of a single-core system), but generally CPUs moved beyond that decades ago. (Write-back vs Write-Through caching?). IIRC, some non-CPU workloads sometimes benefit from write-through caching, especially without write-allocate so writes don't pollute cache. x86 has NT stores to avoid that problem.
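
            As a side note on the NT stores just mentioned: they are the user-space way to get cache-bypassing writes on ordinary WB pages, without needing a WT or WC mapping. A minimal sketch (the buffer size and fill value are arbitrary):

            /* Non-temporal (streaming) stores on ordinary WB memory, so a large
             * write-only pass doesn't pollute the caches.  _mm_stream_si128 is the
             * SSE2 movntdq intrinsic. */
            #include <immintrin.h>
            #include <stdlib.h>
            #include <stdint.h>

            static void fill_nt(void *dst, size_t bytes, uint32_t pattern)
            {
                __m128i v = _mm_set1_epi32((int)pattern);
                char *p = dst;
                /* dst must be 16-byte aligned and bytes a multiple of 16 */
                for (size_t i = 0; i < bytes; i += 16)
                    _mm_stream_si128((__m128i *)(p + i), v);  /* bypasses the caches via WC buffers */
                _mm_sfence();  /* order the NT stores before any later normal stores */
            }

            int main(void)
            {
                size_t bytes = (size_t)64 << 20;              /* 64 MiB */
                void *buf = aligned_alloc(64, bytes);
                if (!buf) return 1;
                fill_nt(buf, bytes, 0xdeadbeefu);
                free(buf);
                return 0;
            }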

            So if the page attribute is set to write-through, then they all will be write-through?

            Yes, every store has to go all the way to DRAM in a page that's marked WT.

            The caches are optimized for WB because that's what everyone uses, but apparently do support passing on the line to outer caches without evicting from L1d. (So WT doesn't turn stores into something like movntps cache-bypassing / evicting stores.)

            When should a page be set to write-through? What are the advantages to that?

            Basically never; (almost?) all CPU workloads do best with WB memory.

            OSes don't even bother to make it easy (or possible?) for user-space to allocate WC or WT DRAM pages. (Although that certainly doesn't prove they're never useful.) e.g. on CPU cache inhibition, I found a link about a Linux patch that never made it into the mainline kernel that added the possibility of mapping a page WT.

            WB, WC, and UC are common for normal DRAM, device memory (especially GPU), and MMIO respectively.

            I have seen at least one paper that benchmarked WT vs. WB vs. UC vs. WC for some workload (googled but didn't find it, sorry). And people testing obscure x86 stuff will sometimes include it for completeness. e.g. The Microarchitecture Behind Meltdown is a good article in general (and related to what you're reading up on).

            One of the few advantages of WT is that stores end up in L3 promptly where loads from other cores can hit. This may possibly be worth the extra cost for every store to that page, especially if you're careful to manually combine your writes into one large 32-byte AVX store. (Or 64-byte AVX512 full-line write.) And of course only use that page for shared data.

            I haven't seen anyone ever recommend doing this, though, and it's not something I've tried. Probably because the extra DRAM bandwidth for writing through L3 as well isn't worth the benefit for most use-cases. But probably also because you might have to write a kernel module to get a page mapped that way.

            Source https://stackoverflow.com/questions/61129142

            QUESTION

            About the RIDL vulnerabilities and the "replaying" of loads
            Asked 2020-Apr-09 at 11:07

            I'm trying to understand the RIDL class of vulnerability.

            This is a class of vulnerabilities that is able to read stale data from various micro-architectural buffers.
            Today the known vulnerabilities exploit the LFBs, the load ports, the eMC and the store buffer.

            The paper linked is mainly focused on LFBs.

            I don't understand why the CPU would satisfy a load with the stale data in an LFB.
            I can imagine that if a load misses in L1d it is internally "replayed" until the data is brought into an LFB (and then the L1d), signalling the OoO core to stop "replaying" it (since the data read is now valid).

            However I'm not sure what "replay" actually means.
            I thought loads were dispatched to a load-capable port and then recorded in the Load Buffer (in the MOB), where they are eventually held as needed until their data is available (as signalled by the L1).
            So I'm not sure how "replaying" comes into play; furthermore, for RIDL to work, each attempt to "play" a load should also unblock dependent instructions.
            This seems weird to me as the CPU would need to keep track of which instructions to replay after the load correctly completes.

            The paper on RIDL uses this code as an example (unfortunately I had to paste it as an image since the PDF layout didn't allow me to copy it):

            The only reason it could work is if the CPU first satisfies the load at line 6 with stale data and then replays it.
            This seems to be confirmed a few lines below:

            Specifically, we may expect two accesses to be fast, not just the one corresponding to the leaked information. After all, when the processor discovers its mistake and restarts at Line 6 with the right value, the program will also access the buffer with this index.

            But I would expect the CPU to check the address of the load before forwarding the data in the LFB (or any other internal buffer).
            Unless the CPU actually executes the load repeatedly until it detects that the data loaded is now valid (i.e. replaying).
            But, again, why would each attempt unblock dependent instructions?

            How exactly does the replaying mechanism work, if it even exists, and how does it interact with the RIDL vulnerabilities?

            ...

            ANSWER

            Answered 2020-Apr-09 at 11:07

            replay = being dispatched again from the RS (scheduler). (This isn't a complete answer to your whole question, just to the part about what replays are. Although I think this covers most of it, including unblocking dependent uops.)

            Parts of this answer have a misunderstanding about load replays.

            See discussion in chat - uops dependent on a split or cache-miss load get replayed, but not the load itself. (Unless the load depends on itself in a loop, like I had been doing for testing >.<). TODO: fix the rest of this answer and others.

            It turns out that a cache-miss load doesn't just sit around in a load buffer and wake up dependent uops when the data arrives. The scheduler has to re-dispatch the load uop to actually read the data and write-back to a physical register. (And put it on the forwarding network where dependent uops can read it in the next cycle.)

            So L1 miss / L2 hit will result in 2x as many load uops dispatched. (The scheduler is optimistic, and L2 is on-core so the expected latency of an L2 hit is fixed, unlike time for an off-core response. IDK if the scheduler continues to be optimistic about data arriving at a certain time from L3.)

            The RIDL paper provides some interesting evidence that load uops do actually directly interact with LFBs, not waiting for incoming data to be placed in L1d and just reading it from there.

            We can observe replays in practice most easily for cache-line-split loads, because causing that repeatedly is even more trivial than cache misses, taking less code. The counts for uops_dispatched_port.port_2 and port_3 will be about twice as high for a loop that does only split loads. (I've verified this in practice on Skylake, using essentially the same loop and testing procedure as in How can I accurately benchmark unaligned access speed on x86_64)
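
            A minimal sketch of that kind of test (not the exact loop used above; the event names are a Skylake-specific assumption): do nothing but 8-byte loads that straddle a cache-line boundary, then compare the port 2/3 dispatch counts to the number of loads executed.

            /* Cache-line-split loads, the easy way to provoke replays.  An 8-byte
             * load at offset 60 within a 64-byte line spans two lines, so
             *   perf stat -e uops_dispatched_port.port_2,uops_dispatched_port.port_3 ./a.out
             * should show roughly 2x as many load-port dispatches as loads. */
            #include <stdint.h>
            #include <stdlib.h>
            #include <string.h>

            enum { LINE = 64, LINES = 1024 };

            int main(void)
            {
                char *buf = aligned_alloc(LINE, (size_t)LINES * LINE);
                if (!buf) return 1;
                memset(buf, 1, (size_t)LINES * LINE);   /* small, warm buffer: loads hit in cache */

                uint64_t sum = 0;
                for (long rep = 0; rep < 100000; rep++)
                    for (long i = 0; i < LINES - 1; i++) {
                        uint64_t v;
                        memcpy(&v, buf + i * LINE + 60, sizeof v);  /* straddles two lines */
                        sum += v;
                    }
                return (int)(sum & 0xff);               /* keep the loads live */
            }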

            Instead of signalling successful completion back to the RS, a load that detects a split (only possible after address-calculation) will do the load for the first part of the data, putting this result in a split buffer [1] to be joined with the data from the 2nd cache line the 2nd time the uop dispatches. (Assuming that neither time is a cache miss, otherwise it will take replays for that, too.)

            When a load uop dispatches, the scheduler anticipates it will hit in L1d and dispatches dependent uops so they can read the result from the forwarding network in the cycle the load puts them on that bus.

            If that didn't happen (because the load data wasn't ready), the dependent uops will have to be replayed as well. Again, IIRC this is observable with the perf counters for dispatch to ports.

            Existing Q&As with evidence of uop replays on Intel CPUs:

            Footnote 1:

            We know there are a limited number of split buffers; there's a ld_blocks.no_sr counter for loads that stall for lack of one. I infer they're in the load port because that makes sense. Re-dispatching the same load uop will send it to the same load port because uops are assigned to ports at issue/rename time. Although maybe there's a shared pool of split buffers.

            RIDL:

            Optimistic scheduling is part of the mechanism that creates a problem. The more obvious problem is letting execution of later uops see a "garbage" internal value from an LFB, like in Meltdown.

            http://blog.stuffedcow.net/2018/05/meltdown-microarchitecture/ even shows that meltdown loads in PPro expose various bits of microarchitectural state, exactly like this vulnerability that still exists in the latest processors.

            The Pentium Pro takes the “load value is a don’t-care” quite literally. For all of the forbidden loads, the load unit completes and produces a value, and that value appears to be various values taken from various parts of the processor. The value varies and can be non-deterministic. None of the returned values appear to be the memory data, so the Pentium Pro does not appear to be vulnerable to Meltdown.

            The recognizable values include the PTE for the load (which, at least in recent years, is itself considered privileged information), the 12th-most-recent stored value (the store queue has 12 entries), and rarely, a segment descriptor from somewhere.

            (Later CPUs, starting with Core 2, expose the value from L1d cache; this is the Meltdown vulnerability itself. But PPro / PII / PIII isn't vulnerable to Meltdown. It apparently is vulnerable to RIDL attacks in that case instead.)

            So it's the same Intel design philosophy that's exposing bits of microarchitectural state to speculative execution.

            Squashing that to 0 in hardware should be an easy fix; the load port already knows it wasn't successful so masking the load data according to success/fail should hopefully only add a couple extra gate delays, and be possible without limiting clock speed. (Unless the last pipeline stage in the load port was already the critical path for CPU frequency.)

            So it is probably an easy and cheap fix in hardware for future CPUs, but very hard to mitigate with microcode and software for existing CPUs.

            Source https://stackoverflow.com/questions/56187269

            QUESTION

            Delphi COM object not showing up in .NET
            Asked 2019-Nov-12 at 03:16

            I am trying to create a COM object in Delphi and use it in a C# WPF project.

            I have created a new DLL project in Delphi 10.3, using File -> New -> Other, then Delphi -> Windows -> ActiveX Library. I have created the following IDL in the GUI editor for my *.ridl file:

            ...

            ANSWER

            Answered 2019-Nov-12 at 03:16

            I rebuilt the COM object's interface in the *.ridl Design editor. I tried adding the same interface as a DispInterface just for kicks; that didn't work. Then deleting the DispInterface and adding back the original Interface seemed to work (plus Refresh, Register, and Save as TLB). Also, I noticed that leaving a Version of 0.0 for my interface may be the reason why it worked; originally, I tweaked the Version to 1.0 when I first built it out. If someone could shed some light on why this could happen, I would be quite intrigued to know!

            Source https://stackoverflow.com/questions/58783776

            QUESTION

            How can I get the last letter from EditText and put it in the adapter
            Asked 2019-Aug-25 at 10:30

            I have a delete button which deletes text in an EditText, and the value of the EditText is the item clicked in a RecyclerView. When I delete text from the EditText it disappears from the adapter. How can I return it to the adapter after clicking delete?

            ...

            ANSWER

            Answered 2019-Aug-25 at 10:05

            You could just get the text from EditText:

            Source https://stackoverflow.com/questions/57644968

            QUESTION

            How to control social share image and text when sharing shiny apps on social media (deployed with shinyapps.io)
            Asked 2019-Apr-04 at 10:41

            When I share my Shiny app on social media, code is displayed under the header image, which doesn't look very good. How do I control the social share image and text for my Shiny app? I've deployed the app using shinyapps.io.

            See image below as example:

            Link to app: https://regionalinnovationdatalab.shinyapps.io/Dashboard/

            Link to Gitlab where App code and data can be found: https://gitlab.com/r.chappell/2019_ElectionApp_RIDL

            Code from UI where header image is inserted:

            ...

            ANSWER

            Answered 2019-Apr-04 at 10:41

            After I sought help from RStudio support, they said there was an issue with the JavaScript code when published to shinyapps.io, which is why it didn't work.

            I ended up finding a work around.

            I changed this code:

            Source https://stackoverflow.com/questions/55348816

            QUESTION

            I'm a bit stuck with Dynamic Virtual Channels in Delphi
            Asked 2019-Apr-02 at 10:17

            I'm in the process of converting code for Dynamic Virtual Channels in RDS (aka Terminal services) from C++ to Delphi based on the https://github.com/earthquake/UniversalDVC/tree/master/UDVC-Plugin in C++. This requires several classes for registering the client plugin and I found definitions in tsvirtualchannels.h and .idl on my Windows system at C:\Program Files (x86)\Windows Kits\10\Include\10.0.17763.0\um .

            Unfortunately the .idl file does not contain a type library, so I couldn't build a type library .tlb file and import that so I've been recreating it manually in Delphi.

            Embarcadero use a .ridl file, similar but not quite the same as idl files. One of the attributes in the .idl file is cpp_quote("string"). For example

            ...

            ANSWER

            Answered 2019-Apr-02 at 10:17

            midl.exe (from Windows SDK) should be able to generate a tlb from the idl file.

            If there's no library definition in the idl file, you can add one yourself, as explained by Hans Passant in his answer to this question.

            Source https://stackoverflow.com/questions/55416973

            QUESTION

            Passing strings in / out of Delphi COM Server
            Asked 2019-Feb-01 at 21:58

            I've tried several variations of the code below, returning either HRESULT (which is the preferred COM convention) or BSTR. I've tried other datatypes as well. I usually get a "missing implementation of interface method" compile error, but when I used a return type of WideString, there was a runtime AccessViolationException on the result := RetVal; instruction.

            I'm using C# on the client side:

            var msg = delphi.GetMessage("My Message");

            Here is my RIDL:

            HRESULT _stdcall GetMessage([in] BSTR msg, [out, retval] BSTR* RetVal);

            Here is my implementation:

            ...

            ANSWER

            Answered 2019-Feb-01 at 21:58

            Your RIDL declaration is correct.

            You did not show the C# declaration of the method, so we can't see if you are marshaling parameters correctly or not.

            On the Delphi side, your implementation is missing the stdcall calling convention (to match the RIDL declaration), as well as exception handling so you can return a proper HRESULT on failures:

            Source https://stackoverflow.com/questions/54486135

            QUESTION

            player's Rect is not aligned with the player/character
            Asked 2018-Jun-07 at 06:20

            I got the bullets to shoot, but the player rect is not aligned with the player itself, so the bullets don't come from the player but rather from the rect, which is offset.

            The 3 main Classes:

            (bullet, camera and player)

            ...

            ANSWER

            Answered 2018-Jun-07 at 06:20

            The problem with your code is that neither your red rectangle nor the bullets are drawn to the screen in relation to the camera.

            The Bullet class should subclass Sprite, too, so you can add them to the all_sprite-group, like you do with the obstacles and the player.

            Then let the Camera-class handle the drawing of the bullets.

            As for the red rectangle, I suggest removing the RelRect function and move it into the Camera class itself, like this:

            Source https://stackoverflow.com/questions/50723067

            Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install ridl

            You can download it from GitHub.

            Support

            For any new features, suggestions, or bugs, create an issue on GitHub. If you have any questions, check for and ask them on Stack Overflow.
            CLONE
          • HTTPS

            https://github.com/vusec/ridl.git

          • CLI

            gh repo clone vusec/ridl

          • SSH

            git@github.com:vusec/ridl.git
