bliki | A small blog + wiki engine built on Sinatra and Stone

by bomberstudios · Ruby · Version: Current · License: No License

kandi X-RAY | bliki Summary

bliki is a Ruby library. It has no reported bugs or vulnerabilities, and it has low support. You can download it from GitHub.

bliki is a blog + wiki engine. It is not finished yet, so use it at your own risk. I'm currently using it on my own blog without (too many) problems, but as I coded it and know how it works, that's hardly an objective benchmark. If you need to write content in Markdown, store it using Stone, and you think your host will be happy sending heavily cached content quite a bit faster than Rails, then bliki might be worth looking at...

Comments are handled by Disqus because a) I'm too bad a programmer to code a decent comment system, b) the habtm system in Stone sucks big time, and c) Disqus already did it.

Themes are stored in the 'themes' folder. There's a sample theme (called 'default', in a wicked display of imagination) you can copy to a new folder and tweak to your heart's content.

There's a helper to insert Reinvigorate tracking codes, but it's easy to add your own tracking.

There's also a (crude) importer for WordPress posts. It has only been tested with the latest (i.e. svn) WordPress version, and its only purpose was to import content from my own blog. If it works for you that would be great, but don't count on it :).

I have included a config.ru file you can use for Passenger if you use Dreamhost.

Last (but not least), whenever you publish a post, Pingomatic will be pinged. Wiki pages do not trigger pings.

Support

bliki has a low-activity ecosystem.
It has 38 stars and 4 forks. There is 1 watcher for this library.
It had no major release in the last 6 months.
bliki has no issues reported. There are no pull requests.
It has a neutral sentiment in the developer community.
The latest version of bliki is current.

Quality

              bliki has 0 bugs and 0 code smells.

Security

              bliki has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              bliki code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

License

              bliki does not have a standard license declared.
              Check the repository for any license declaration and review the terms closely.
              Without a license, all rights are reserved, and you cannot use the library in your applications.

Reuse

              bliki releases are not available. You will need to build from source code and install.
              Installation instructions are available. Examples and code snippets are not available.
              bliki saves you 340 person hours of effort in developing the same functionality from scratch.
              It has 814 lines of code, 47 functions and 26 files.
              It has medium code complexity. Code complexity directly impacts maintainability of the code.

            Top functions reviewed by kandi - BETA

kandi has reviewed bliki and discovered the following top functions. This is intended to give you an instant insight into bliki's implemented functionality, and help you decide if it suits your requirements.
• Gets the authorization for the request.
• Authenticates via the authentication method.
• Sends a request.
• Checks if the request is authorized.
• Decodes credentials.
• Creates a request using authentication.

            bliki Key Features

            No Key Features are available at this moment for bliki.

            bliki Examples and Code Snippets

            No Code Snippets are available at this moment for bliki.

            Community Discussions

            QUESTION

            Cache and update regularly complex data
            Asked 2021-Mar-05 at 06:49

Let's start with some background. I have an API endpoint that I have to query every 15 minutes and that returns complex data. Unfortunately this endpoint does not provide information on what exactly changed, so it requires me to compare the data I have in the db with everything it returns, and then execute updates, inserts or deletes. This is pretty tedious...

I came to an idea that I can simply remove all the data from certain tables and rebuild everything from scratch... But I also have to return this cached data to my clients, so there might be a situation where the db is empty during some request from my client because it is being "refreshed/rebuilt". And that can't happen, because I have to return something.

So I came to the idea to either:

1. Lock the relevant db tables so that the client has to wait for the db refresh,

or

2. Use CQRS https://martinfowler.com/bliki/CQRS.html

Do you have any suggestions on how to solve the problem?

            ...

            ANSWER

            Answered 2021-Mar-05 at 06:49

            It sounds like you're using a relational database, so I'll try to outline a solution using database terms. The idea, however, is more general than that. In general, it's similar to Blue-Green deployment.

            Have two data tables (or two databases, for that matter); one is active, and one is inactive.

            When the software starts the update process, it can wipe the inactive table and write new data into it. During this process, the system keeps serving data from the active table.

            Once the data update is entirely done, the system can begin to serve data from the previously inactive table. In other words, the inactive table becomes the active table, and vice versa.
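A minimal Ruby sketch of that swap, using two in-memory tables and an atomic pointer flip (the class and method names are illustrative, not from the answer):

```ruby
# Blue-green style refresh: the rebuild writes into the inactive
# copy while reads keep hitting the active one; a single, mutex-
# guarded assignment swaps the two.
class TableSwap
  def initialize
    @tables = { blue: {}, green: {} }
    @active = :blue
    @lock = Mutex.new
  end

  # Readers always see a fully built table.
  def read(key)
    @tables[@active][key]
  end

  # Wipe and rebuild the inactive table, then flip the pointer.
  def refresh(fresh_rows)
    inactive = @active == :blue ? :green : :blue
    @tables[inactive].clear
    @tables[inactive].merge!(fresh_rows)
    @lock.synchronize { @active = inactive }
  end
end

store = TableSwap.new
store.refresh("id-1" => { name: "widget" })
store.read("id-1") # => { name: "widget" }
```

The same idea works with two real database tables or schemas: rebuild the inactive one, then switch which table the queries target in a single transaction.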

            Source https://stackoverflow.com/questions/66462984

            QUESTION

            How to share arguments between two Mutation classes
            Asked 2021-Feb-06 at 03:38

In my Rails API project using graphql-ruby, I have two Mutations that stand for two slightly different flows. They share a lot of arguments, and I would like to keep things DRY by avoiding repeating the similar arguments, so that when we change them in one place, we don't accidentally forget to change the other.

            I was wondering how I could solve this problem without falling into the boolean/flag trap resulting in a tangled implementation.

I could use concepts of inheritance or composition, but how would I do it?

            ...

            ANSWER

            Answered 2021-Feb-06 at 03:38

A good way I found is using mixins.

With the help of SuperModule or ActiveSupport::Concern, I think it yields pretty clean code.

            app/graphql/mutations/create_thing.rb
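The snippet that followed this path is not shown on this page; below is a hedged sketch of the mixin approach with ActiveSupport::Concern (the argument names and Types::ThingType are invented for illustration):

```ruby
# app/graphql/mutations/concerns/shared_thing_arguments.rb
# Arguments shared by both mutations live in one mixin, so a
# change here automatically applies to every including mutation.
module Mutations
  module SharedThingArguments
    extend ActiveSupport::Concern

    included do
      argument :name, String, required: true
      argument :description, String, required: false
    end
  end
end

# app/graphql/mutations/create_thing.rb
module Mutations
  class CreateThing < BaseMutation
    include SharedThingArguments

    field :thing, Types::ThingType, null: true

    def resolve(name:, description: nil)
      # flow-specific logic goes here
    end
  end
end
```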

            Source https://stackoverflow.com/questions/66073345

            QUESTION

Nested looping within XQuery results in a mismatch? and idioms
            Asked 2020-Dec-13 at 13:50

            I'm playing with the bookstore XML from w3schools:

            ...

            ANSWER

            Answered 2020-Dec-13 at 13:50

            I'm not sure what your question is, to be honest, but your XQuery can be simplified quite a bit

            Source https://stackoverflow.com/questions/65275437

            QUESTION

            Exceptions from Data Lake immutability rule
            Asked 2020-Mar-25 at 20:24

            Data Lake should be immutable:

            It is important that all data put in the lake should have a clear provenance in place and time. Every data item should have a clear trace to what system it came from and when the data was produced. The data lake thus contains a historical record. This might come from feeding Domain Events into the lake, a natural fit with Event Sourced systems. But it could also come from systems doing a regular dump of current state into the lake - an approach that's valuable when the source system doesn't have any temporal capabilities but you want a temporal analysis of its data. A consequence of this is that data put into the lake is immutable, an observation once stated cannot be removed (although it may be refuted later), you should also expect ContradictoryObservations.

Are there any exceptions from this rule, where it may be considered good practice to overwrite data in a Data Lake? I suppose not, but some teammates have a different understanding.

I think that data provenance and traceability are needed in the case of a cumulative algorithm, to be able to reproduce the final state. What if the final state isn't dependent on previous results? Is somebody right if he says that Data Lake immutability (event sourcing) is needed only for cumulative algorithms?

For example, you have a full-load daily-basis ingestion of tables A and B, and afterwards calculate table C. If the user is interested only in the latest result of C, are there any reasons to keep the history (event sourcing based on date partitioning) of A, B and C?

Another concern may be ACID compliance - you may have a file corrupted or partially written. But suppose we're discussing the case when the latest state of A and B can be easily restored from the source systems.

            ...

            ANSWER

            Answered 2020-Mar-25 at 20:24

Are there any exceptions from this rule, where it may be considered good practice to overwrite data in a Data Lake?

Good practice is not to overwrite data in the data lake. If some event was generated with an error or bug, new events that compensate the previous one should be produced. That way, the data lake records the whole event history, including compensatory events and eventual reprocessings.

I think that data provenance and traceability are needed in the case of a cumulative algorithm, to be able to reproduce the final state. What if the final state isn't dependent on previous results? Is somebody right if he says that Data Lake immutability (event sourcing) is needed only for cumulative algorithms?

The data lake is the final destination for all relevant events. Not all events need to be recorded in the data lake; usually, we distinguish between operational/communication events and business events. The business events recorded in the data lake can be used for reprocessing or in new features that depend on the event history. Isolated events that do not depend on the event history can also be produced and added to the history. Consequently, we can deduce that the final state does not violate the principle of immutability: for a set of immutable events contiguous in time, we can always produce a final state. So the answer is: not only for cumulative algorithms.

For example, you have a full-load daily-basis ingestion of tables A and B, and afterwards calculate table C. If the user is interested only in the latest result of C, are there any reasons to keep the history (event sourcing based on date partitioning) of A, B and C?

The starting event of an event history cannot be reproduced; only after the first event can we think about the final state. In this particular case, the A and B tuples and aggregations should not be considered events but the calculation function's input, and that input should be recorded in the data lake as a business event. The event X (calculation input) in the end produces the event Y. If the event X is not recorded in the event history, Y should be considered the starting event.
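A hedged Ruby sketch of the compensating-event idea (the structure and names are illustrative): history is append-only, and a wrong observation is refuted by a later event rather than removed.

```ruby
# Append-only event log: errors are corrected by appending a
# compensating event, never by rewriting or deleting history.
Event = Struct.new(:id, :type, :payload, :compensates)

class EventLog
  def initialize
    @events = [] # there is no update or delete path
  end

  def append(event)
    @events << event
    event
  end

  # The "final state" is just a fold over the immutable history.
  def final_state
    @events.each_with_object({}) do |e, state|
      if e.type == :compensation
        state.delete(e.compensates)
      else
        state[e.id] = e.payload
      end
    end
  end
end

log = EventLog.new
log.append(Event.new("e1", :observation, { temp: 21 }, nil))
# "e1" turned out to be wrong; refute it instead of removing it.
log.append(Event.new("e2", :compensation, nil, "e1"))
log.final_state # => {}
```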

            Source https://stackoverflow.com/questions/60726595

            QUESTION

            What is the use of DSL in Koin?
            Asked 2019-Dec-02 at 19:53

In Dagger 2, we had code generation at compile time. According to the Koin library website, Koin doesn't do any code generation and is a "DSL, a lightweight container and a pragmatic API".

After reading Martin Fowler's blog regarding DSLs, it appears to me that a DSL can be generated into code, or interpreted at run time.

From Martin's article:

DSLs can be implemented either by interpretation or code generation. Interpretation (reading in the DSL script and executing it at run time) is usually easiest, but code-generation is sometimes essential. Usually the generated code is itself a high level language, such as Java or C.

But if Koin doesn't generate any code, is Koin interpreted at runtime? Does it mean Koin comes with some kind of parser? As I have seen, there is no parser, so does it mean that Kotlin itself is the parser?

            Thank you

            ...

            ANSWER

            Answered 2019-Dec-02 at 18:58

            There are internal and external DSLs. To quote Fowler:

            DSLs come in two main forms: external and internal. An external DSL is a language that's parsed independently of the host general purpose language: good examples include regular expressions and CSS. External DSLs have a strong tradition in the Unix community. Internal DSLs are a particular form of API in a host general purpose language, often referred to as a fluent interface. The way mocking libraries, such as JMock, define expectations for tests are good examples of this, as are many of the mechanisms used by Ruby on Rails. Internal DSLs also have a long tradition of usage, particularly in the Lisp community.

            It is external DSLs which (sometimes, not always) involve code generation; Koin is an internal DSL, which doesn't.

Following Martin Fowler's blog https://martinfowler.com/bliki/DomainSpecificLanguage.html, he says that DSLs can either be converted to code (code generation) or interpreted at run time.

            These are two options for external DSLs.

            If I understand you right, it means Koin DSL is simply Kotlin

            Yes.

            written differently, interpreted at runtime.

            No. It's just Kotlin, compiled using the Kotlin compiler. There's no major distinction between internal DSLs and libraries; if it makes your code readable enough, it can be considered a DSL.
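Since this page's library is Ruby, a Ruby analogue makes the internal-DSL point concrete (the router below is invented for illustration; it is not Koin): the "DSL" is ordinary method calls and blocks, and the only parser involved is the host language's own.

```ruby
# A tiny internal DSL for declaring routes, in plain Ruby.
# `get` is just a method call; the block is an ordinary Ruby
# block. No separate parser or code generation is involved.
class RouterDSL
  def initialize
    @routes = {}
  end

  def get(path, &handler)
    @routes[path] = handler
  end

  def dispatch(path)
    @routes.fetch(path).call
  end
end

def routes(&block)
  RouterDSL.new.tap { |r| r.instance_eval(&block) }
end

app = routes do
  get("/hello") { "Hello, world" }
end

puts app.dispatch("/hello") # => Hello, world
```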

            Source https://stackoverflow.com/questions/59141999

            QUESTION

            Ensuring that all messages have been read from Kafka topic using REST Proxy
            Asked 2019-Oct-04 at 13:41

            I'm new to Kafka, and our team is investigating patterns for inter-service communication.

            The goal

            We have two services, P (Producer) and C (Consumer). P is the source of truth for a set of data that C needs. When C starts up it needs to load all of the current data from P into its cache, and then subscribe to change notifications. (In other words, we want to synchronize data between the services.)

            The total amount of data is relatively low, and changes are infrequent. A brief delay in synchronization is acceptable (eventual consistency).

            We want to decouple the services so that P and C do not need to know about each other.

            The proposal

            When P starts up, it publishes all of its data to a Kafka topic that has log compaction enabled. Each message is an aggregate with a key of its ID.

            When C starts up, it reads all of the messages from the beginning of the topic and populates its cache. It then keeps reading from its offset to be notified of updates.

            When P updates its data, it publishes a message for the aggregate that changed. (This message has the same schema as the original messages.)

            When C receives a new message, it updates the corresponding data in its cache.

            Constraints

            We are using the Confluent REST Proxy to communicate with Kafka.

            The issue

            When C starts up, how does it know when it's read all of the messages from the topic so that it can safely start processing?

            It's acceptable if C does not immediately notice a message that P sent a second ago. It's not acceptable if C starts processing before consuming a message that P sent an hour ago. Note that we don't know when updates to P's data will occur.

            We do not want C to have to wait for the REST Proxy's poll interval after consuming each message.

            ...

            ANSWER

            Answered 2019-Jul-30 at 14:14

If you would like to find the end offsets of the partitions assigned to a consumer group, in order to know when you've gotten all the data at a point in time, you can use

            Source https://stackoverflow.com/questions/57222357

            QUESTION

MySQL query with DATEDIFF failed in H2 where mode was MySQL
            Asked 2019-Jun-04 at 08:37

Background: In one of my projects I am doing component testing on Spring Batch using JUnit. Here the application DB is MySQL. In JUnit test execution I let the data source switch between

• MySQL and
• H2 (jdbc:h2:mem:MYTESTDB;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE;MODE=MYSQL)

based on configuration. I use MySQL as the data source for debugging purposes and H2 to run the tests in isolation on build servers.

Everything worked fine until, in the application logic, I had to use a query with DATEDIFF.

Issue: The query fails with

org.h2.jdbc.JdbcSQLException: Syntax error in SQL statement

Reason: Even though H2 runs in MySQL mode, it uses H2's own functions, and those functions are different

            ...

            ANSWER

            Answered 2018-Mar-01 at 08:58

I got a workaround for the problem using reflection: access H2's internal functions map, remove DATEDIFF from it, and then register the replacement function.

            Source https://stackoverflow.com/questions/49041723

            QUESTION

Is it ok to place test double implementations in the main Java code
            Asked 2019-Apr-17 at 17:32

With regard to test doubles, is it ok to place them in the main code so that they can be used when testing code that has the original library jar as a dependency?

There is code that assists me in unit testing (test doubles) in a library I implemented. Is it ok to keep this code in the jar so that it can be used by code that uses the library as a dependency?

            ...

            ANSWER

            Answered 2019-Apr-17 at 17:32

            The short answer is no.

As you are referring to Java, I may point to Maven dependency scopes and Gradle configurations. They separate dependencies between your main code and those purely for test purposes, and they exist for a reason. You may use production-ready libraries for your tests, but you must not deploy test doubles to your production system.

I trust your test doubles comply with your style guide and are tested themselves. But what about security requirements? Are you really sure you do not introduce vulnerabilities into your system with these test doubles? Do you really want to consider these unnecessary parts of your system while planning your penetration test? Or worse, do you want to have to explain to your pen test specialist why they should consider them?

If you developed reusable test components, you did something useful. But you have to isolate them in separate libraries. They are part of a special-purpose test framework. Treat them as you would treat JUnit.

            Source https://stackoverflow.com/questions/55729476

            QUESTION

Authentication using both Session cookies and/or JWT Bearer Token in a .NET Framework Legacy Application
            Asked 2019-Feb-28 at 19:42

            I am currently in the process of migrating an existing ASP.NET MVC Monolithic application to .NET Core using microservices. Part of this migration now involves using Identity Server 4 with an external identity provider. The migration is using the strangler pattern in that we are slowly migrating out the old system, rather than doing the entire migration at once. Therefore we must support the legacy system that uses cookie based session authentication as well as the new microservice apis that use JWT bearer tokens.

Currently I am exploring the simplest method to support both types of authentication until the migration is 100% complete. The web app is fine, as it still uses Razor pages and server-side authentication, and can therefore easily establish a session as well as pass back my token for API authentication. My mobile client is the problem: it is built using Ionic and previously used server-side session authentication (before my time..). Now that I am using the OpenID Connect protocol with an external IdP, all the authentication flow is handled on the client for mobile, therefore a session is not being created.

Option 1: Use strictly my JWT bearer token for authentication on my mobile application and somehow configure my MVC Controllers to be able to use JWT authentication if it is present in the header, or Cookie authentication if there is a cookie present in the request (legacy web app). I am not sure if I would need to create custom middleware to accomplish this, or if there is a way to configure something in the startup without having to create middleware, so that the controllers' [authorize] attribute will simply know to use cookie or bearer auth. I found an article explaining that this is possible in .NET Core, but have yet to find anything related to .NET Framework.

            ...

            ANSWER

            Answered 2019-Feb-28 at 19:42

            I did something similar recently where I had API Controllers and MVC Controllers in the same project. The API controllers authenticated with JWT and the MVC Controllers with Cookie Authentication.

            This may not translate directly to your issue but hopefully it helps...

Startup.Auth is set up similarly to what you have in option 1.

            In your configuration classes you can use filters to select the type of authentication to use:

            Source https://stackoverflow.com/questions/54931550

            QUESTION

            What is the Kotlin philosophy of test doubles?
            Asked 2018-Oct-28 at 09:01

            What is the Kotlin philosophy of test doubles? With all classes and functions being sealed by default (not open), mocking frameworks like Mockito are clearly not first-class citizens.

A lot has already been written about the problem, and also about possible solutions, but what do the designers of Kotlin have in mind for testing with doubles? Is it fakes and stubs, or should you roll your own spies and mocks?

            ...

            ANSWER

            Answered 2017-May-26 at 07:47

A great part of Kotlin's design was done with Joshua Bloch's Effective Java in mind: closed by default, immutability, and so on.

            Therefore, these problems exist in Java as well. The solution to these problems was and is to use interfaces where possible, so you can provide test doubles, or any other implementation to your production code for that matter.

For tests, these might be mocks, or fakes, or stubs, or what have you. Bear in mind that a well-written in-memory repository, for example, is way easier to handle in tests than having to set up a mock for that class.
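Translated into Ruby (this page's language), the same advice looks like the hedged sketch below: production code depends on an interface (here, a duck type), and tests inject an in-memory fake instead of a mock (all names are illustrative):

```ruby
# An in-memory fake that satisfies the same duck type as a real,
# database-backed repository, so tests can inject it directly.
class InMemoryUserRepository
  def initialize
    @users = {}
  end

  def save(user)
    @users[user[:id]] = user
  end

  def find(id)
    @users[id]
  end
end

# Production code only cares that the repository responds to
# #save and #find, not which implementation it gets.
class Registration
  def initialize(repo)
    @repo = repo
  end

  def register(id, name)
    @repo.save(id: id, name: name)
  end
end

repo = InMemoryUserRepository.new
Registration.new(repo).register(1, "Ada")
repo.find(1) # => { id: 1, name: "Ada" }
```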

            Source https://stackoverflow.com/questions/44196144

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install bliki

1. Clone the repo.
2. Run 'rake install'.
3. Copy 'config.sample.yml' to 'config.yml' and edit it to suit your needs (or run 'rake configure').
4. Run 'ruby bliki.rb' and cross your fingers.
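The project description mentions a bundled config.ru for Passenger on Dreamhost. If you ever need to recreate it, a minimal sketch for a classic-style Sinatra app would look like this (assuming bliki.rb defines its routes at the top level, which is not confirmed here):

```ruby
# config.ru -- rackup file for Passenger (or `rackup` locally).
require File.expand_path('bliki', __dir__)

# Classic-style Sinatra apps are served via Sinatra::Application.
run Sinatra::Application
```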

            Support

For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask them on Stack Overflow.
CLONE
• HTTPS: https://github.com/bomberstudios/bliki.git
• CLI: gh repo clone bomberstudios/bliki
• SSH: git@github.com:bomberstudios/bliki.git


Consider Popular Ruby Libraries

• rails by rails
• jekyll by jekyll
• discourse by discourse
• fastlane by fastlane
• huginn by huginn

Try Top Libraries by bomberstudios

• sketch-commands by bomberstudios (JavaScript)
• sketch-framer by bomberstudios (JavaScript)
• artboard-manager by bomberstudios (JavaScript)
• fireworks by bomberstudios (JavaScript)
• sinatra_wiki by bomberstudios (Ruby)