disruptor | High Performance Inter-Thread Messaging Library | Architecture library
kandi X-RAY | disruptor Summary
- Processes multiple events.
- Polls for the next event handler.
- Removes the sequence from the holder field.
- Starts the actor.
- Helper method to apply a wait method to the barrier.
- Translates two arguments.
- Compares this event with the given value.
- Reads the next value from the polled event.
- Gets the last sequence in the chain.
- Waits for the next sequence to be available for the given sequence.
disruptor Key Features
disruptor Examples and Code Snippets
Trending Discussions on disruptor
QUESTION
If this already has an answer, I haven't managed to find it. I have spent many hours getting this far, before throwing in the towel and asking here! When it comes to Maven, I would describe myself as a 'Sunday driver'.
Plugin versions: compiler=3.9.0; resources and dependencies=3.2.0; jar=3.2.2; assembly=3.3.0.
I have two Maven projects, let's call them AppA and Proj1. Proj1 contains all of the 'working' code and 3rd party jar dependencies.
AppA contains the Main class and the app's folders such as 'conf' and 'logs'. Both projects have 'jar' packaging.
AppA's pom has the plugins required to create the jar file with a manifest that defines all of the required jar files in its classpath as 'lib/xxx.jar'. It also has 'Proj1' as a dependency.
The problem I have is that Maven is assembling the zip file before copying all of the dependent jars to the 'lib' folder. Which means that the 'lib' folder is missing from the zip file.
If I build AppA from a single project, the zip file is assembled after the 'lib' folder has been populated.
Can anyone advise me what I need to do to persuade Maven to copy the dependent jar files to 'lib' before assembling the zip file?
The reason that I have this structure is so that I can create AppB + Proj1 in the future.
Also, the lib folder contains all of the Maven plugin jars and their dependencies. When I build from a single project, they are excluded.
[pom.xml]
4.0.0
com.w3p.njams
com.w3p.iib.njams.client
Beta-1.0.1.0
jar
nJAMS Client App for IIB
nJAMS Client App for IIB
1.8
3.9.0
Beta-1.0.1.0
njamsIIBClient
com.w3p.api.iib10
Beta-1.0.1.0
3.2.0
3.2.0
1.0.0
2.17.1
3.4.4
org.apache.logging.log4j
log4j-core
${log4j.version}
org.apache.logging.log4j
log4j-slf4j-impl
${log4j.version}
test
com.lmax
disruptor
${disruptor.version}
com.w3p.njams
com.w3p.njams.client
${njams.client.version}
com.w3p.njams
${ibm.api.artifact}
${ibm.api.version}
org.apache.maven.plugins
maven-dependency-plugin
${dependency.version}
org.apache.maven.plugins
maven-resources-plugin
3.2.0
org.apache.maven.plugins
maven-assembly-plugin
3.3.0
org.apache.maven.plugins
maven-source-plugin
3.2.1
org.apache.maven.plugins
maven-resources-plugin
${resources.plugin.version}
copy-resources
validate
copy-resources
true
${project.build.directory}/${client.build.dir}_${project.version}
cache
pmd
${project.build.directory}/${client.build.dir}_${project.version}/conf
conf
true
log4j2-test.xml
njams*.xml
${project.build.directory}/${client.build.dir}_${project.version}/flowToProcessModelCache
flowToProcessModelCache
true
dummy.cache
${project.build.directory}/${client.build.dir}_${project.version}/certs
certs
true
dummy.cert
*-endpoint
*-instanceId
*.key
*.pem
${project.build.directory}/${client.build.dir}_${project.version}/logs
logs
true
njams*.log
${project.build.directory}/${client.build.dir}_${project.version}/images
images
false
${project.build.directory}/${client.build.dir}_${project.version}/jms
jms
true
JNDI_Local/*.bindings
JNDI_Remote/*.bindings
${project.build.directory}/${client.build.dir}_${project.version}/monitoringProfiles
monitoringProfiles
true
dummyProfile.xml
*.xsd
${project.build.directory}/${client.build.dir}_${project.version}/processModels
processModels
true
dummy.pmd
${project.build.directory}/${client.build.dir}_${project.version}/scripts
scripts
false
org.apache.maven.plugins
maven-compiler-plugin
${maven.compiler.version}
${jdk.version}
${jdk.version}
org.apache.maven.plugins
maven-dependency-plugin
${dependency.version}
copy-dependencies
install
copy-dependencies
${project.build.directory}/${client.build.dir}_${project.version}/lib
false
false
true
compile
org.apache.maven.plugins
maven-jar-plugin
3.2.2
${project.build.directory}/${client.build.dir}_${project.version}
${client.build.dir}-${project.version}
**/*.properties
**/*.xml
true
lib/
com.w3p.im.iib.mon.client.IIBMonitoringClient
. resources
org.apache.maven.plugins
maven-assembly-plugin
3.3.0
create-archive
package
single
false
src/main/assembly/zip.xml
${project.basedir}
[zip.xml]
zip
/
false
zip
${project.build.directory}/${client.build.dir}_${project.version}
${client.build.dir}_${project.version}
ANSWER
Answered 2022-Jan-24 at 16:12
It happens because the maven-assembly-plugin executes in an earlier phase (package) than the maven-dependency-plugin (install). Try to set up the executions of the plugins so the build acts as you expect, e.g. by binding copy-dependencies to a phase that runs before package.
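A minimal sketch of that rebinding, assuming the copy-dependencies execution otherwise stays as in the pom above:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-dependency-plugin</artifactId>
  <version>${dependency.version}</version>
  <executions>
    <execution>
      <id>copy-dependencies</id>
      <!-- prepare-package runs before package, so lib/ is populated
           before maven-assembly-plugin builds the zip -->
      <phase>prepare-package</phase>
      <goals>
        <goal>copy-dependencies</goal>
      </goals>
      <configuration>
        <outputDirectory>${project.build.directory}/${client.build.dir}_${project.version}/lib</outputDirectory>
        <includeScope>compile</includeScope>
      </configuration>
    </execution>
  </executions>
</plugin>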
I would also suggest a different approach which I think can simplify your build configuration: use a multi-module pom which aggregates both projects. Then, in the concrete pom.xml of AppA, use Proj1 as a dependency. It will save you from copying files around and repackaging.
QUESTION
I’m currently stuck on an issue that’s happening with our sticky nav.
When a user scrolls down the screen very slowly, our second navigation, which is a sticky nav, flickers for some reason. I don't know what it could be.
I’ve tried adding “-webkit-transform: translateZ(0);” to the “.affix” and ".affix-top" classes with no luck.
This issue is only happening on Chrome and Edge. Firefox, IE11 and Safari this issue does not occur thankfully.
What's causing this, and how (if at all) can it be resolved?
Here’s the JS for the sticky nav:
$( document ).ready(function(){
$('.full-width-anchorLinks').parent().addClass('full-browser-width-wrap');
if( $(".sticky").length ) {
var $navbar = $(".sticky");
var scrollTop = $('body').scrollTop(),
elementOffset = $navbar.offset().top,
distance = (elementOffset - scrollTop),
anchor = Math.round(distance);
$navbar.affix({offset: {top: anchor} });
var scrollSpyOffsetTotal = 0;
// Header Height: onLoad (Use mainly for when scrollTop is 0)
if ($(".globalHeaderV2, .consumerHeaderV2").length) {
var headerHeight = $('.globalHeaderV2, .consumerHeaderV2').height();
scrollSpyOffsetTotal += headerHeight;
}
// Nav Container Height: on page scroll
if ($(".navbar-main-fixed, .nav-container.sticky").length) {
var navContainer = $('.navbar-main-fixed, .nav-container.sticky').height();
scrollSpyOffsetTotal += navContainer;
}
if ($(".affix-top, .affix").length) {
var affixHeight = $('.affix-top, .affix').height();
scrollSpyOffsetTotal += affixHeight;
}
if ($(".breadcrumb").length) {
var breadcrumbHeight = $('.breadcrumb').height();
scrollSpyOffsetTotal += breadcrumbHeight;
}
//If sticky breadcrumbs exist and tablet/desktop view point,
$('body').addClass('scroll-main').scrollspy({target: '.navbar', offset: scrollSpyOffsetTotal});
//On scroll change the top position of '.affix' based on sticky main nav and sticky breadcrumbs
$(window).scroll(function() {
var totalOffset = 0;
// Old consumer (forhome)/Business Header
if ($(".navbar-main-fixed").length) {
var navHeight = $(".navbar-main-fixed").height();
totalOffset += navHeight;
}
// Consumer Header
if ($(".nav-container.sticky").length) {
var consumerNavHeight = $(".nav-container.sticky").height();
totalOffset += consumerNavHeight;
}
if ($(".breadcrumb-fixed").length) {
var breadcrumbHeight = $(".breadcrumb-fixed").height();
totalOffset += breadcrumbHeight;
}
if ($(".sticky-sem-header").length) {
var semHeaderHeight = $(".sticky-sem-header").height();
totalOffset += semHeaderHeight;
}
// Desktop
if ($(window).width() >= 1024) {
$(".affix").css("top",totalOffset+"px");
} else if ($(window).width() < 1024) {
var mobileNavHeight = $('.navbar-main-fixed, .nav-container.sticky').height();
$(".affix").css("top",mobileNavHeight+"px");
}
});
}
// Add smooth scrolling on all links inside the navbar
$(".anchorLinks a").on('click', function(event) {
// Make sure this.hash has a value before overriding default behavior
if (this.hash !== "") {
// Prevent default anchor click behavior
event.preventDefault();
// Init destination var
var dest = 0;
var hash = this.hash;
var scrollTop = window.pageYOffset || document.documentElement.scrollTop;
var headerHeight = 0;
var navContainer = 0;
var breadcrumbHeight = 0;
// Header Height: onLoad (Use mainly for when scrollTop is 0)
if ($(".globalHeaderV2, .consumerHeaderV2").length) {
headerHeight = $('.globalHeaderV2, .consumerHeaderV2').height();
}
// Nav Container Height
if ($(".navbar-main-fixed, .nav-container.sticky, .nav-sticky-wrapper").length) {
navContainer = $('.navbar-main-fixed, .nav-container.sticky, .nav-sticky-wrapper').height();
}
// Affix (sticky nav)
var affixHeight = $('.affix, .affix-top').height();
if (scrollTop === 0) {
// Desktop
if ($(window).width() >= 1024) {
if ($(".breadcrumb").length && ($(".breadcrumb ul.crumbs").css("display") != "none")) {
breadcrumbHeight = $('.breadcrumb').height();
// Exists When "supercrumb" is added
if ($(".breadcrumb .crumbs.supercrumb").length && ($(".breadcrumb ul.crumbs").css("display") != "none")) {
// Check if disruptor exists
if ($(".disruptorPanel").length && ($(".disruptorPanel").css("display") != "none")) {
var disruptorPanel = $('.disruptorPanel').height();
dest = $(hash).offset().top - (disruptorPanel + headerHeight + breadcrumbHeight);
} else {
// W fixed breadcrumb
dest = $(hash).offset().top - (headerHeight + breadcrumbHeight + 106 + affixHeight);
}
} else {
// W/O Fixed breadcrumb
dest = $(hash).offset().top - (navContainer + breadcrumbHeight + 50 + affixHeight);
}
} else {
dest = $(hash).offset().top - (headerHeight + affixHeight - 10);
}
// Mobile
} else if ($(window).width() < 1024) {
// Mobile Nav Container Height
// Check Business Site/ Old forHome / Research Site for disruptor
if ($(".disruptorPanel").length && $(".bottom-bar").length) {
navContainer = $('.bottom-bar').height();
var disruptorPanel = $('.disruptorPanel').height();
dest = $(hash).offset().top - (headerHeight + navContainer + affixHeight - disruptorPanel);
} else if ($(".bottom-bar").length) {
dest = $(hash).offset().top - (headerHeight + navContainer + affixHeight);
}
// New consumer
if ($(".nav-container").length) {
dest = $(hash).offset().top - (headerHeight);
}
}
} else {
if ($(".breadcrumb-fixed").length && ($(".breadcrumb ul.crumbs").css("display") != "none")) {
breadcrumbHeight = $('.breadcrumb-fixed').height();
dest = $(hash).offset().top - (navContainer + breadcrumbHeight + affixHeight);
// Remove breadcrumb height, since breadcrumb does not show on tablet wide and down
if ($(window).width() < 1024) {
dest = $(hash).offset().top - (navContainer + affixHeight);
}
} else {
dest = $(hash).offset().top - (navContainer + affixHeight);
}
}
setTimeout(function () {
window.location.hash = hash;
}, 300);
// Scroll to destination - Using jQuery's animate() method to add smooth page scroll
// The optional number (800) specifies the number of milliseconds it takes to scroll to the specified area
$('html, body').animate({scrollTop: dest}, 800);
} // End if
});
});
Any help is gladly appreciated.
Thank you!
ANSWER
Answered 2022-Jan-22 at 17:08
In order to make it work, please do the following things:
- Add position: sticky (and the other styles) to the nav element; see the sketch after this list.
2A. Remove the code that toggles between .affix and .affix-top.
OR:
2B-1. If you can't do step 2A, set an explicit height instead (so that .affix and .affix-top have the same height); this is also shown in the sketch below.
2B-2. Remove position: fixed from .affix and position: static from .affix-top (they don't need their own positioning because we set position on their parent).
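A minimal sketch of steps 1 and 2B-1, assuming the sticky wrapper is the .sticky element targeted by the jQuery above (the offset and height values are assumptions):

.sticky {
  position: -webkit-sticky; /* Safari */
  position: sticky;
  top: 0;       /* adjust to sit below any fixed header */
  z-index: 100; /* keep the nav above the content it overlaps */
}

/* 2B-1: give both affix states the same explicit height so toggling
   between .affix and .affix-top cannot change layout and flicker */
.affix,
.affix-top {
  height: 60px; /* hypothetical value; use your nav's real height */
}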
In addition, I don't know if it's third-party code or not, but please try not to use the !important property; it makes it hard to set styles on those elements.
QUESTION
I tried to implement LMAX in Python. I tried to handle data in 4 processes.
import disruptor
import multiprocessing
import random
if __name__ == '__main__':
    cb = disruptor.CircularBuffer(5)

    def receiveWriter():
        while(True):
            n = random.randint(5,20)
            cb.receive(n)

    def ReplicatorReader():
        while(True):
            cb.replicator()

    def journalerReader():
        while(True):
            cb.journaler()

    def unmarshallerReader():
        while(True):
            cb.unmarshaller()

    def consumeReader():
        while(True):
            print(cb.consume())

    p1 = multiprocessing.Process(name="p1",target=ReplicatorReader)
    p1.start()
    p0 = multiprocessing.Process(name="p0",target=receiveWriter)
    p0.start()
    p1 = multiprocessing.Process(name="p1",target=ReplicatorReader)
    p1.start()
    p2 = multiprocessing.Process(name="p2",target=journalerReader)
    p2.start()
    p3 = multiprocessing.Process(name="p3",target=unmarshallerReader)
    p3.start()
    p4 = multiprocessing.Process(name="p4",target=consumeReader)
    p4.start()
but I get this error in my code:
Traceback (most recent call last):
File "", line 1, in
File "", line 1, in
File "C:\Program Files\Python39\lib\multiprocessing\spawn.py", line 116, in spawn_main
File "C:\Program Files\Python39\lib\multiprocessing\spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
exitcode = _main(fd, parent_sentinel)
File "C:\Program Files\Python39\lib\multiprocessing\spawn.py", line 126, in _main
File "C:\Program Files\Python39\lib\multiprocessing\spawn.py", line 126, in _main
self = reduction.pickle.load(from_parent)
self = reduction.pickle.load(from_parent)
AttributeError: Can't get attribute 'unmarshallerReader' on
AttributeError: Can't get attribute 'consumeReader' on
ANSWER
Answered 2021-Oct-11 at 13:06
Your first problem is that the target of a Process call cannot be defined within the if __name__ == '__main__': block. But:
As I mentioned in an earlier post of yours, the only way I see that you can share an instance of CircularBuffer across multiple processes is to implement a managed class, which surprisingly is not all that difficult to do. But when you create a managed class and create an instance of that class, what you have is actually a proxy reference to the object. This has two implications:
- Each method call is more like a remote procedure call to a special server process created by the manager you will start up and therefore has more overhead than a local method call.
- If you print the reference, the class's __str__ method will not be called; you will be printing a representation of the proxy pointer. You should probably rename method __str__ to something like dump and call that explicitly whenever you want a representation of the instance.
You should also explicitly wait for the completion of the processes you are creating so that the manager service does not shutdown prematurely, which means that each process should be assigned to a unique variable and have a unique name.
import disruptor
import multiprocessing
from multiprocessing.managers import BaseManager
import random
class CircularBufferManager(BaseManager):
    pass

def receiveWriter(cb):
    while(True):
        n = random.randint(5,20)
        cb.receive(n)

def ReplicatorReader(cb):
    while(True):
        cb.replicator()

def journalerReader(cb):
    while(True):
        cb.journaler()

def unmarshallerReader(cb):
    while(True):
        cb.unmarshaller()

def consumeReader(cb):
    while(True):
        print(cb.consume())

if __name__ == '__main__':
    # Create managed class
    CircularBufferManager.register('CircularBuffer', disruptor.CircularBuffer)
    # create and start manager:
    with CircularBufferManager() as manager:
        cb = manager.CircularBuffer(5)
        p1 = multiprocessing.Process(name="p1", target=ReplicatorReader, args=(cb,))
        p1.start()
        p0 = multiprocessing.Process(name="p0", target=receiveWriter, args=(cb,))
        p0.start()
        p1a = multiprocessing.Process(name="p1a", target=ReplicatorReader, args=(cb,))
        p1a.start()
        p2 = multiprocessing.Process(name="p2", target=journalerReader, args=(cb,))
        p2.start()
        p3 = multiprocessing.Process(name="p3", target=unmarshallerReader, args=(cb,))
        p3.start()
        p4 = multiprocessing.Process(name="p4", target=consumeReader, args=(cb,))
        p4.start()

        p1.join()
        p0.join()
        p1a.join()
        p2.join()
        p3.join()
        p4.join()
QUESTION
I am trying to project a flat data source into an object that can be serialized directly to JSON using Newtonsoft.Json. I've created a small program in Linqpad with an imagined inventory overview as a test. The requested output is as follows:
- Site name
  - Inventory name
    - Product name
      - Weight
      - Units
  - Inventory name
    - Product (etc)
  - Inventory name
For the life of me I can't get it to have a single "Site name" as the only root object. I want it to list the contents inside an inventory inside a site, but it always ends up looking like the JSON output below.
How can I make the "Site" distinct, having a collection of "inventories", each of which has a collection of "products"?
My actual data source is a database table and it resembles the structure of my test object - and it is what it is.
The test code in Linqpad: (note that it references Newtonsoft.Json)
void Main()
{
var contents = new List<DatabaseRecord>()
{
new DatabaseRecord{Product="Autoblaster", Inventory="Hull 5", Site="Death star", Units=20,Weight=500},
new DatabaseRecord{Product="E11 Blaster Rifle", Inventory="Hull 5", Site="Death star", Units=512,Weight=4096},
new DatabaseRecord{Product="SWE/2 Sonic Rifle", Inventory="Hull 1", Site="Death star", Units=20,Weight=500},
new DatabaseRecord{Product="Relby v10 Micro Grenade Launcher", Inventory="Hull 5", Site="Death star", Units=20,Weight=500},
new DatabaseRecord{Product="T-8 Disruptor", Inventory="Hull 1", Site="Death star", Units=20,Weight=500},
new DatabaseRecord{Product="E11 Blaster Rifle", Inventory="Hull 2", Site="Death star", Units=50,Weight=1200}
};
var inventorycontent = from row in contents
group row by row.Site into sites
orderby sites.Key
select from inventory in sites
group inventory by inventory.Inventory into inventories
orderby inventories.Key
select new
{
site = sites.Key,
inventory = inventories.Key,
lines = inventories.Select(i => new { i.Product, i.Weight, i.Units })
};
contents.Dump();
inventorycontent.Dump();
JsonConvert.SerializeObject(inventorycontent, Newtonsoft.Json.Formatting.Indented).Dump();
}
// Define other methods and classes here
class DatabaseRecord
{
public string Product { get; set; }
public string Inventory { get; set; }
public string Site { get; set; }
public int Units { get; set; }
public double Weight { get; set; }
}
JSON output:
[
[
{
"site": "Death star",
"inventory": "Hull 1",
"lines": [
{
"Product": "SWE/2 Sonic Rifle",
"Weight": 500.0,
"Units": 20
},
{
"Product": "T-8 Disruptor",
"Weight": 500.0,
"Units": 20
}
]
},
{
"site": "Death star",
"inventory": "Hull 2",
"lines": [
{
"Product": "E11 Blaster Rifle",
"Weight": 1200.0,
"Units": 50
}
]
},
{
"site": "Death star",
"inventory": "Hull 5",
"lines": [
{
"Product": "Autoblaster",
"Weight": 500.0,
"Units": 20
},
{
"Product": "E11 Blaster Rifle",
"Weight": 4096.0,
"Units": 512
},
{
"Product": "Relby v10 Micro Grenade Launcher",
"Weight": 500.0,
"Units": 20
}
]
}
]
]
Suggested correct output sample:
{
"sites":[{
"site": "Death star",
"inventories":[
{
"name":"Hull 1",
"lines":[{
"Product": "SWE/2 Sonic Rifle",
"Weight": 500.0,
"Units": 20
},
{
"Product": "T-8 Disruptor",
"Weight": 500.0,
"Units": 20
}]
},
{
"name":"Hull 2",
"lines":[{
"Product": "SWE/2 Sonic Rifle",
"Weight": 500.0,
"Units": 20
}]
}
]
},
{"site": "Other site",
"inventories":[
{
"name":"Hull 1",
"lines":[{
"Product": "SWE/2 Sonic Rifle",
"Weight": 500.0,
"Units": 20
}]
}]
}]
}
ANSWER
Answered 2021-Sep-14 at 15:17
OK, I have a solution using dictionaries which will group everything properly:
//first get everything properly grouped with dictionaries
var result = contents
.GroupBy(x => x.Site)
.ToDictionary(g => g.Key, g => g
.GroupBy(i => i.Inventory)
.ToDictionary(i => i.Key, i => i
.Select(a => new
{
Product = a.Product,
Weight = a.Weight,
Units = a.Units
})
.ToList()));
//project to a new object that matches your desired json
var formattedResult = new
{
sites = (from r in result
select new
{
site = r.Key,
inventories = (from i in r.Value select new { name = i.Key, lines = i.Value }).ToList()
}).ToList()
};
This is the output json:
{
"sites": [
{
"site": "Death star",
"inventories": [
{
"name": "Hull 5",
"lines": [
{
"Product": "Autoblaster",
"Weight": 500.0,
"Units": 20
},
{
"Product": "E11 Blaster Rifle",
"Weight": 4096.0,
"Units": 512
},
{
"Product": "Relby v10 Micro Grenade Launcher",
"Weight": 500.0,
"Units": 20
}
]
},
{
"name": "Hull 1",
"lines": [
{
"Product": "SWE/2 Sonic Rifle",
"Weight": 500.0,
"Units": 20
},
{
"Product": "T-8 Disruptor",
"Weight": 500.0,
"Units": 20
}
]
},
{
"name": "Hull 2",
"lines": [
{
"Product": "E11 Blaster Rifle",
"Weight": 1200.0,
"Units": 50
}
]
}
]
}
]
}
As you can see, I group by Site and turn that into a dictionary, within which I group by Inventory and turn that into another dictionary with a list of products as values.
So basically the result is a Dictionary<string, Dictionary<string, List<...>>> where the dictionary keys are Site and Inventory Name.
I use this GroupBy → ToDictionary approach every time I have to build hierarchical JSON like this.
QUESTION
I am trying to analyze and implement mixed sync and async logging. I am using a Spring Boot application along with the disruptor API. My log4j configuration uses this pattern layout:
[%-5level] %d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %c{1} - %msg%n
Demo class 1:
package com.example.logging;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.LogManager;
@SpringBootApplication
public class DemoApplication2 {
static Logger logger = LogManager.getLogger(DemoApplication2.class);
public static void main(String[] args) {
SpringApplication.run(DemoApplication2.class, args);
long startTime = System.currentTimeMillis();
for(int i = 0; i < 2000; i++) {
logger.debug("Async : " + i);
}
System.out.println("time taken:: " + (System.currentTimeMillis() - startTime));
}
}
Using the above code, I am expecting the "System.out" statement to print before the logging of all "debug" statements completes, as I am using async logging for the "debug" level. So a few debug logs should be written first (e.g. the first 100 or 150), then the SOP should be printed, and then the remaining debug logs should be written. But when I run my application, all debug statements are logged first and then the SOP prints, which is not the expected result. Furthermore, if I use additivity="false" on the async logger, then I do see the expected result described above. Now I have a 2nd demo class:
package com.example.logging;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.LogManager;
@SpringBootApplication
public class DemoApplication3 {
static Logger logger = LogManager.getLogger(DemoApplication3.class);
public static void main(String[] args) {
SpringApplication.run(DemoApplication3.class, args);
long startTime = System.currentTimeMillis();
for(int i = 0; i < 2000; i++) {
logger.info("Sync : " + i);
}
System.out.println("time taken:: " + (System.currentTimeMillis() - startTime));
}
}
Now with the above class, I am expecting all sync logging first, with the SOP printed after all the info logs. But if I add additivity="false" to my configuration, then all logs are async only.
In the end, I couldn't configure sync and async logging at the same time. Kindly help and suggest.
ANSWER
Answered 2021-Apr-27 at 07:41
I'm not really sure what you think you are testing.
When additivity is enabled the log event will be copied and placed into the Disruptor's Ring Buffer where it will be routed to the console appender on a different thread. After placing the copied event in the buffer the event will be passed to the root logger and routed to the Console Appender in the same thread. Since both the async Logger and sync Logger are doing the same thing they are going to take approximately the same time. So I am not really sure why you believe anything will be left around by the time the System.out call is made.
When you only use the async logger the main thread isn't doing anything but placing events in the queue, so it will respond much more quickly and it would be quite likely your System.out message would appear before all log events have been written.
I suspect there is one very important piece of information you are overlooking. When an event is routed to a Logger the level specified on the LoggerConfig the Logger is associated with is checked. When additivity is true the event is not routed to a parent Logger (there isn't one). It is routed to the LoggerConfig's parent LoggerConfig. A LoggerConfig calls isFiltered(event) which ONLY checks Filters that have been configured on the LoggerConfig. So even though you have level="info" on your Root logger, debug events sent to it via the AsyncLogger will still be logged. You would have to add a ThresholdFilter to the RootLogger to prevent that.
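To illustrate, a ThresholdFilter on the Root logger could look like the following sketch (the appender name is an assumption; the poster's actual configuration was not shown):

<Root level="info">
  <!-- Drop events below INFO even when they arrive via an
       AsyncLogger whose own level is more permissive -->
  <ThresholdFilter level="INFO" onMatch="NEUTRAL" onMismatch="DENY"/>
  <AppenderRef ref="Console"/>
</Root>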
QUESTION
Recently, when writing data into Elasticsearch with BulkRequest, I got the following exception:
org.elasticsearch.action.ActionRequestValidationException: Validation Failed: 1: id is missing;
at org.elasticsearch.action.bulk.BulkRequest.validate(BulkRequest.java:614)
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1731)
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1697)
at org.elasticsearch.client.RestHighLevelClient.bulk(RestHighLevelClient.java:473)
at com.clougence.cloudcanal.es6.task.write.full.Es6FullInsertExecutorImpl.insert(Es6FullInsertExecutorImpl.java:70)
at com.clougence.cloudcanal.es6.task.write.full.Es6FullApplyHandler.handle(Es6FullApplyHandler.java:35)
at com.clougence.cloudcanal.es6.task.write.full.Es6FullApplyHandler.handle(Es6FullApplyHandler.java:16)
at com.clougence.cloudcanal.task.applier.full.FullApplyWorkHandler.onEvent(FullApplyWorkHandler.java:61)
at com.clougence.cloudcanal.task.applier.full.FullApplyWorkHandler.onEvent(FullApplyWorkHandler.java:20)
at com.lmax.disruptor.WorkProcessor.run(WorkProcessor.java:143)
at java.lang.Thread.run(Thread.java:748)
My code is as follows. I debugged my code and found that esIdValue is blank, and this causes the exception.
updateRequest = new UpdateRequest().index(indexDef.getIndexName())
.id(esIdValue)
.type(DEFAULT_TYPE)
.doc(docMap)
.docAsUpsert(true);
bulkRequest.add(updateRequest);
My question is: does Elasticsearch support writing a doc with _id value ""? Can I put an empty _id value into ES?
ANSWER
Answered 2021-Mar-10 at 05:01
The ES _id field doesn't support a blank value like "".
You have 2 options:
- Always provide an id.
- Remove the id field that you have, and Elastic will assign an auto-generated one in the "_id" field. Something like:
updateRequest = new UpdateRequest().index(indexDef.getIndexName())
.type(DEFAULT_TYPE)
.doc(docMap)
.docAsUpsert(true);
bulkRequest.add(updateRequest)
QUESTION
I have added a field to metadata for transferring and persisting in the status index. The field is a List of String and its name is input_keywords. After running the topology in the Storm cluster, the topology halted with the following logs:
java.lang.RuntimeException: com.esotericsoftware.kryo.KryoException: java.util.ConcurrentModificationException
Serialization trace:
md (com.digitalpebble.stormcrawler.Metadata)
at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:522) ~[storm-core-1.2.1.3.1.4.0-315.jar:1.2.1.3.1.4.0-315]
at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:487) ~[storm-core-1.2.1.3.1.4.0-315.jar:1.2.1.3.1.4.0-315]
at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:74) ~[storm-core-1.2.1.3.1.4.0-315.jar:1.2.1.3.1.4.0-315]
at org.apache.storm.disruptor$consume_loop_STAR_$fn__4132.invoke(disruptor.clj:84) ~[storm-core-1.2.1.3.1.4.0-315.jar:1.2.1.3.1.4.0-315]
at org.apache.storm.util$async_loop$fn__1221.invoke(util.clj:484) [storm-core-1.2.1.3.1.4.0-315.jar:1.2.1.3.1.4.0-315]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
Caused by: com.esotericsoftware.kryo.KryoException: java.util.ConcurrentModificationException
Serialization trace:
md (com.digitalpebble.stormcrawler.Metadata)
at com.esotericsoftware.kryo.serializers.ObjectField.write(ObjectField.java:101) ~[kryo-3.0.3.jar:?]
at com.esotericsoftware.kryo.serializers.FieldSerializer.write(FieldSerializer.java:518) ~[kryo-3.0.3.jar:?]
at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:628) ~[kryo-3.0.3.jar:?]
at com.esotericsoftware.kryo.serializers.CollectionSerializer.write(CollectionSerializer.java:100) ~[kryo-3.0.3.jar:?]
at com.esotericsoftware.kryo.serializers.CollectionSerializer.write(CollectionSerializer.java:40) ~[kryo-3.0.3.jar:?]
at com.esotericsoftware.kryo.Kryo.writeObject(Kryo.java:534) ~[kryo-3.0.3.jar:?]
at org.apache.storm.serialization.KryoValuesSerializer.serializeInto(KryoValuesSerializer.java:44) ~[storm-core-1.2.1.3.1.4.0-315.jar:1.2.1.3.1.4.0-315]
at org.apache.storm.serialization.KryoTupleSerializer.serialize(KryoTupleSerializer.java:44) ~[storm-core-1.2.1.3.1.4.0-315.jar:1.2.1.3.1.4.0-315]
at org.apache.storm.daemon.worker$mk_transfer_fn$transfer_fn__10378.invoke(worker.clj:203) ~[storm-core-1.2.1.3.1.4.0-315.jar:1.2.1.3.1.4.0-315]
at org.apache.storm.daemon.executor$start_batch_transfer_GT_worker_handler_BANG$fn__10056.invoke(executor.clj:314) ~[storm-core-1.2.1.3.1.4.0-315.jar:1.2.1.3.1.4.0-315]
at org.apache.storm.disruptor$clojure_handler$reify__4115.onEvent(disruptor.clj:41) ~[storm-core-1.2.1.3.1.4.0-315.jar:1.2.1.3.1.4.0-315]
at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:509) ~[storm-core-1.2.1.3.1.4.0-315.jar:1.2.1.3.1.4.0-315]
... 6 more
Caused by: java.util.ConcurrentModificationException
at java.util.HashMap$HashIterator.nextNode(HashMap.java:1437) ~[?:1.8.0_112]
at java.util.HashMap$EntryIterator.next(HashMap.java:1471) ~[?:1.8.0_112]
at java.util.HashMap$EntryIterator.next(HashMap.java:1469) ~[?:1.8.0_112]
at com.esotericsoftware.kryo.serializers.MapSerializer.write(MapSerializer.java:99) ~[kryo-3.0.3.jar:?]
at com.esotericsoftware.kryo.serializers.MapSerializer.write(MapSerializer.java:39) ~[kryo-3.0.3.jar:?]
at com.esotericsoftware.kryo.Kryo.writeObject(Kryo.java:552) ~[kryo-3.0.3.jar:?]
at com.esotericsoftware.kryo.serializers.ObjectField.write(ObjectField.java:80) ~[kryo-3.0.3.jar:?]
at com.esotericsoftware.kryo.serializers.FieldSerializer.write(FieldSerializer.java:518) ~[kryo-3.0.3.jar:?]
at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:628) ~[kryo-3.0.3.jar:?]
at com.esotericsoftware.kryo.serializers.CollectionSerializer.write(CollectionSerializer.java:100) ~[kryo-3.0.3.jar:?]
at com.esotericsoftware.kryo.serializers.CollectionSerializer.write(CollectionSerializer.java:40) ~[kryo-3.0.3.jar:?]
at com.esotericsoftware.kryo.Kryo.writeObject(Kryo.java:534) ~[kryo-3.0.3.jar:?]
at org.apache.storm.serialization.KryoValuesSerializer.serializeInto(KryoValuesSerializer.java:44) ~[storm-core-1.2.1.3.1.4.0-315.jar:1.2.1.3.1.4.0-315]
at org.apache.storm.serialization.KryoTupleSerializer.serialize(KryoTupleSerializer.java:44) ~[storm-core-1.2.1.3.1.4.0-315.jar:1.2.1.3.1.4.0-315]
at org.apache.storm.daemon.worker$mk_transfer_fn$transfer_fn__10378.invoke(worker.clj:203) ~[storm-core-1.2.1.3.1.4.0-315.jar:1.2.1.3.1.4.0-315]
at org.apache.storm.daemon.executor$start_batch_transfer_GT_worker_handler_BANG$fn__10056.invoke(executor.clj:314) ~[storm-core-1.2.1.3.1.4.0-315.jar:1.2.1.3.1.4.0-315]
at org.apache.storm.disruptor$clojure_handler$reify__4115.onEvent(disruptor.clj:41) ~[storm-core-1.2.1.3.1.4.0-315.jar:1.2.1.3.1.4.0-315]
at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:509) ~[storm-core-1.2.1.3.1.4.0-315.jar:1.2.1.3.1.4.0-315]
... 6 more
2021-02-27 08:03:34.276 o.a.s.d.executor Thread-20-disruptor-executor[45 45]-send-queue [ERROR]
java.lang.RuntimeException: com.esotericsoftware.kryo.KryoException: java.util.ConcurrentModificationException
Serialization trace:
md (com.digitalpebble.stormcrawler.Metadata)
at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:522) ~[storm-core-1.2.1.3.1.4.0-315.jar:1.2.1.3.1.4.0-315]
at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:487) ~[storm-core-1.2.1.3.1.4.0-315.jar:1.2.1.3.1.4.0-315]
at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:74) ~[storm-core-1.2.1.3.1.4.0-315.jar:1.2.1.3.1.4.0-315]
at org.apache.storm.disruptor$consume_loop_STAR_$fn__4132.invoke(disruptor.clj:84) ~[storm-core-1.2.1.3.1.4.0-315.jar:1.2.1.3.1.4.0-315]
at org.apache.storm.util$async_loop$fn__1221.invoke(util.clj:484) [storm-core-1.2.1.3.1.4.0-315.jar:1.2.1.3.1.4.0-315]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
Caused by: com.esotericsoftware.kryo.KryoException: java.util.ConcurrentModificationException
Serialization trace:
md (com.digitalpebble.stormcrawler.Metadata)
at com.esotericsoftware.kryo.serializers.ObjectField.write(ObjectField.java:101) ~[kryo-3.0.3.jar:?]
at com.esotericsoftware.kryo.serializers.FieldSerializer.write(FieldSerializer.java:518) ~[kryo-3.0.3.jar:?]
at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:628) ~[kryo-3.0.3.jar:?]
at com.esotericsoftware.kryo.serializers.CollectionSerializer.write(CollectionSerializer.java:100) ~[kryo-3.0.3.jar:?]
at com.esotericsoftware.kryo.serializers.CollectionSerializer.write(CollectionSerializer.java:40) ~[kryo-3.0.3.jar:?]
at com.esotericsoftware.kryo.Kryo.writeObject(Kryo.java:534) ~[kryo-3.0.3.jar:?]
at org.apache.storm.serialization.KryoValuesSerializer.serializeInto(KryoValuesSerializer.java:44) ~[storm-core-1.2.1.3.1.4.0-315.jar:1.2.1.3.1.4.0-315]
at org.apache.storm.serialization.KryoTupleSerializer.serialize(KryoTupleSerializer.java:44) ~[storm-core-1.2.1.3.1.4.0-315.jar:1.2.1.3.1.4.0-315]
at org.apache.storm.daemon.worker$mk_transfer_fn$transfer_fn__10378.invoke(worker.clj:203) ~[storm-core-1.2.1.3.1.4.0-315.jar:1.2.1.3.1.4.0-315]
at org.apache.storm.daemon.executor$start_batch_transfer_GT_worker_handler_BANG$fn__10056.invoke(executor.clj:314) ~[storm-core-1.2.1.3.1.4.0-315.jar:1.2.1.3.1.4.0-315]
at org.apache.storm.disruptor$clojure_handler$reify__4115.onEvent(disruptor.clj:41) ~[storm-core-1.2.1.3.1.4.0-315.jar:1.2.1.3.1.4.0-315]
at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:509) ~[storm-core-1.2.1.3.1.4.0-315.jar:1.2.1.3.1.4.0-315]
... 6 more
Caused by: java.util.ConcurrentModificationException
at java.util.HashMap$HashIterator.nextNode(HashMap.java:1437) ~[?:1.8.0_112]
at java.util.HashMap$EntryIterator.next(HashMap.java:1471) ~[?:1.8.0_112]
at java.util.HashMap$EntryIterator.next(HashMap.java:1469) ~[?:1.8.0_112]
at com.esotericsoftware.kryo.serializers.MapSerializer.write(MapSerializer.java:99) ~[kryo-3.0.3.jar:?]
at com.esotericsoftware.kryo.serializers.MapSerializer.write(MapSerializer.java:39) ~[kryo-3.0.3.jar:?]
at com.esotericsoftware.kryo.Kryo.writeObject(Kryo.java:552) ~[kryo-3.0.3.jar:?]
at com.esotericsoftware.kryo.serializers.ObjectField.write(ObjectField.java:80) ~[kryo-3.0.3.jar:?]
at com.esotericsoftware.kryo.serializers.FieldSerializer.write(FieldSerializer.java:518) ~[kryo-3.0.3.jar:?]
at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:628) ~[kryo-3.0.3.jar:?]
at com.esotericsoftware.kryo.serializers.CollectionSerializer.write(CollectionSerializer.java:100) ~[kryo-3.0.3.jar:?]
at com.esotericsoftware.kryo.serializers.CollectionSerializer.write(CollectionSerializer.java:40) ~[kryo-3.0.3.jar:?]
at com.esotericsoftware.kryo.Kryo.writeObject(Kryo.java:534) ~[kryo-3.0.3.jar:?]
at org.apache.storm.serialization.KryoValuesSerializer.serializeInto(KryoValuesSerializer.java:44) ~[storm-core-1.2.1.3.1.4.0-315.jar:1.2.1.3.1.4.0-315]
at org.apache.storm.serialization.KryoTupleSerializer.serialize(KryoTupleSerializer.java:44) ~[storm-core-1.2.1.3.1.4.0-315.jar:1.2.1.3.1.4.0-315]
at org.apache.storm.daemon.worker$mk_transfer_fn$transfer_fn__10378.invoke(worker.clj:203) ~[storm-core-1.2.1.3.1.4.0-315.jar:1.2.1.3.1.4.0-315]
at org.apache.storm.daemon.executor$start_batch_transfer_GT_worker_handler_BANG$fn__10056.invoke(executor.clj:314) ~[storm-core-1.2.1.3.1.4.0-315.jar:1.2.1.3.1.4.0-315]
at org.apache.storm.disruptor$clojure_handler$reify__4115.onEvent(disruptor.clj:41) ~[storm-core-1.2.1.3.1.4.0-315.jar:1.2.1.3.1.4.0-315]
at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:509) ~[storm-core-1.2.1.3.1.4.0-315.jar:1.2.1.3.1.4.0-315]
... 6 more
2021-02-27 08:03:34.327 o.a.s.util Thread-20-disruptor-executor[45 45]-send-queue [ERROR] Halting process: ("Worker died")
java.lang.RuntimeException: ("Worker died")
at org.apache.storm.util$exit_process_BANG_.doInvoke(util.clj:341) [storm-core-1.2.1.3.1.4.0-315.jar:1.2.1.3.1.4.0-315]
at clojure.lang.RestFn.invoke(RestFn.java:423) [clojure-1.7.0.jar:?]
at org.apache.storm.daemon.worker$fn_10827$fn_10828.invoke(worker.clj:781) [storm-core-1.2.1.3.1.4.0-315.jar:1.2.1.3.1.4.0-315]
at org.apache.storm.daemon.executor$mk_executor_data$fn_10034$fn_10035.invoke(executor.clj:281) [storm-core-1.2.1.3.1.4.0-315.jar:1.2.1.3.1.4.0-315]
at org.apache.storm.util$async_loop$fn__1221.invoke(util.clj:494) [storm-core-1.2.1.3.1.4.0-315.jar:1.2.1.3.1.4.0-315]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
We have different parallelism hints for each component of the topology. After adding input_keywords to the metadata we got the error. What is the main reason for the error?
ANSWER
Answered 2021-Mar-01 at 10:25
You are modifying a Metadata instance while it is being serialized. You can't do that; see the Storm troubleshooting page.
As explained in the release notes of 1.16, you can lock the metadata. This won't fix the issue but will tell you where in your code you are writing into the metadata.
In our topology, we emitted the same metadata to multiple bolts at the same time.
Mystery explained. Don't do that.
QUESTION
we're migrating domains and some but not all content. The URL structure is different.
Below is what I have in my .htaccess file. I only added the code at the end starting with "#User added 301 Redirect"; the other entries were already in .htaccess.
Expected/Desired: I want anyone who goes to the old main domain to the new main domain, and anyone who attempts to access these specific pages of the old site/domain to go to the mapping in the new site.
Observed: the main domain 301 works (olddomain.com now goes to newdomain.com), as does any URL whose file name/path is exactly the same. Other redirects follow the taxonomy of the old domain instead of using my mapping. So "olddomain.com/about-me" tries to go to "newdomain.com/about-me" instead of the correct mapping "newdomain.com/about" as shown in the .htaccess file, and results in a 404 file-not-found error.
Thoughts? Feel free to respond like I'm five years old.
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
AddHandler x-mapp-php5.5 .php
# BEGIN WordPress
# The directives (lines) between "BEGIN WordPress" and "END WordPress" are
# dynamically generated, and should only be modified via WordPress filters.
# Any changes to the directives between these markers will be overwritten.
RewriteEngine On
RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
# END WordPress
# Wordfence WAF
Require all denied
Order deny,allow
Deny from all
# END Wordfence WAF
# User Added 301 Redirect
Redirect 301 / https://newdomain.com/
Redirect 301 /paleo-salmon-cakes https://newdomain.com/blog-entries/paleo-lemon-ginger-salmon-cakes
Redirect 301 /organic-vs-conventional https://newdomain.com/blog-entries/organic-vs-conventional-produce
Redirect 301 /endocrine-disruptors https://newdomain.com/blog-entries/what-are-endocrine-disruptors-and-why-you-should-care
Redirect 301 /vegan-paleo-caesar-dressing https://newdomain.com/blog-entries/avocado-cashew-caesar-salad-dressing-vegan-and-paleo
Redirect 301 /about-me https://newdomain.com/about
Redirect 301 /contact https://newdomain.com/contact
Redirect 301 /work-with-me/business-client-services https://newdomain.com/for-businesses
Redirect 301 /work-with-me/individual-client-services https://newdomain.com/for-individuals
Redirect 301 /work-with-me/pregnant-postpartum https://newdomain.com/for-individuals
Redirect 301 /spinach-banana-muffins https://newdomain.com/blog-entries/spinach-banana-muffins
# User Added 301 Redirect
ANSWER
Answered 2021-Feb-28 at 17:27
You could try the redirect directives in the following order:
# User Added 301 Redirect
#Redirect 301 / https://newdomain.com/
Redirect 301 /paleo-salmon-cakes https://newdomain.com/blog-entries/paleo-lemon-ginger-salmon-cakes
Redirect 301 /organic-vs-conventional https://newdomain.com/blog-entries/organic-vs-conventional-produce
Redirect 301 /endocrine-disruptors https://newdomain.com/blog-entries/what-are-endocrine-disruptors-and-why-you-should-care
Redirect 301 /vegan-paleo-caesar-dressing https://newdomain.com/blog-entries/avocado-cashew-caesar-salad-dressing-vegan-and-paleo
Redirect 301 /about-me https://newdomain.com/about
Redirect 301 /contact https://newdomain.com/contact
Redirect 301 /work-with-me/business-client-services https://newdomain.com/for-businesses
Redirect 301 /work-with-me/individual-client-services https://newdomain.com/for-individuals
Redirect 301 /work-with-me/pregnant-postpartum https://newdomain.com/for-individuals
Redirect 301 /spinach-banana-muffins https://newdomain.com/blog-entries/spinach-banana-muffins
#
# this as the last line:
#
Redirect 301 / https://newdomain.com/
# User Added 301 Redirect
QUESTION
We want to centralize all our Java application logs on a Graylog server. We use Apache Tomcat as a container and Log4j as the logging framework. log4j2.xml:
Loggers
log detail
2021-01-26 20:05:01,343 http-nio-31381-exec-1 DEBUG Reconnecting /graylog.domain.com:12201
2021-01-26 20:05:01,344 http-nio-31381-exec-1 DEBUG Creating socket /graylog.domain.com:12201
2021-01-26 20:05:01,344 http-nio-31381-exec-1 DEBUG Closing SocketOutputStream java.net.SocketOutputStream@8cb01fa
2021-01-26 20:05:01,345 http-nio-31381-exec-1 DEBUG Connection to graylog.domain.com:12201 reestablished: Socket[addr=/graylog.domain.com,port=12201,localport=41482]
As you can see, my application creates a socket connection with the Graylog server. But we did not see any logs on the Graylog server.
versions
tomcat - 9.0.16.0
jdk - 1.8.0_201-b09(64 bit)
log4j2 - 1.12 / 1.14(both checked)
os - Linux 3.10.0-1062.12.1.el7.x86_64
gray log - Graylog 3.0.2+1686930 on graylogsrv (Oracle Corporation 1.8.0_232 on Linux 3.10.0-1062.9.1.el7.x86_64)
documentation https://logging.apache.org/log4j/2.x/manual/layouts.html#GELFLayout
I want to use log4j2 itself rather than an external library like logstash-gelf.
UPDATE
This is the Graylog server log:
2021-01-27T12:18:45.079+04:00 ERROR [DecodingProcessor] Unable to decode raw message RawMessage{id=45f04b90-6078-11eb-80bf-00505696a882, journalOffset=2771397770, codec=gelf, payloadSize=11, timestamp=2021-01-27T08:18:45.065Z, remoteAddress=/graylog.domain:58258} on input <600ecd97f7c4b60478f4504e>.
2021-01-27T12:18:45.079+04:00 ERROR [DecodingProcessor] Error processing message RawMessage{id=45f04b90-6078-11eb-80bf-00505696a882, journalOffset=2771397770, codec=gelf, payloadSize=11, timestamp=2021-01-27T08:18:45.065Z, remoteAddress=/graylog.domain:58258}
com.fasterxml.jackson.core.JsonParseException: Unexpected character ('�' (code 65533 / 0xfffd)): expected a valid value (number, String, array, object, 'true', 'false' or 'null')
at [Source: �Y�n�8��h; line: 1, column: 2]
at com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:1702) ~[graylog.jar:?]
at com.fasterxml.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:558) ~[graylog.jar:?]
at com.fasterxml.jackson.core.base.ParserMinimalBase._reportUnexpectedChar(ParserMinimalBase.java:456) ~[graylog.jar:?]
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._handleOddValue(ReaderBasedJsonParser.java:1906) ~[graylog.jar:?]
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser.nextToken(ReaderBasedJsonParser.java:749) ~[graylog.jar:?]
at com.fasterxml.jackson.databind.ObjectMapper._initForReading(ObjectMapper.java:3850) ~[graylog.jar:?]
at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:3799) ~[graylog.jar:?]
at com.fasterxml.jackson.databind.ObjectMapper.readTree(ObjectMapper.java:2397) ~[graylog.jar:?]
at org.graylog2.inputs.codecs.GelfCodec.decode(GelfCodec.java:127) ~[graylog.jar:?]
at org.graylog2.shared.buffers.processors.DecodingProcessor.processMessage(DecodingProcessor.java:150) ~[graylog.jar:?]
at org.graylog2.shared.buffers.processors.DecodingProcessor.onEvent(DecodingProcessor.java:91) [graylog.jar:?]
at org.graylog2.shared.buffers.processors.ProcessBufferProcessor.onEvent(ProcessBufferProcessor.java:74) [graylog.jar:?]
at org.graylog2.shared.buffers.processors.ProcessBufferProcessor.onEvent(ProcessBufferProcessor.java:42) [graylog.jar:?]
at com.lmax.disruptor.WorkProcessor.run(WorkProcessor.java:143) [graylog.jar:?]
at com.codahale.metrics.InstrumentedThreadFactory$InstrumentedRunnable.run(InstrumentedThreadFactory.java:66) [graylog.jar:?]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_232]
How can we get the original data to find the error?
ANSWER
Answered 2021-Jan-27 at 15:13
Finally solved. According to the documentation:
GELF TCP does not support compression due to the use of the null byte (\0) as frame delimiter.
So after disabling compression in the log4j2 configuration, we saw our logs on the Graylog server. The below code snippet is a working example.
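The poster's exact snippet was not captured here; a minimal sketch of a GELF TCP appender with compression disabled (the appender name is an assumption; the host and port are taken from the logs above) would be:

<Appenders>
  <!-- compressionType must be OFF for GELF TCP, which uses the
       null byte (\0) as its frame delimiter -->
  <Socket name="Graylog" host="graylog.domain.com" port="12201" protocol="TCP">
    <GelfLayout compressionType="OFF"/>
  </Socket>
</Appenders>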
QUESTION
I am running Apache Ignite .Net in a Kubernetes cluster on Linux nodes.
Recently I updated my Ignite 2.8.1 cluster to v2.9. After the update, some of the services that are part of the cluster fail to start up with the following message:
*** stack smashing detected ***: terminated
Interestingly, most often it happens with the 2nd instances of the same microservice. The first instances usually start up successfully (but sometimes the first instances fail, too). Another observation is that it happens to the nodes which publish Service Grid services. Sometimes a full cluster recycle (killing all the nodes then spinning them up again) helps to get all the nodes to start up, sometimes not.
Did I mess up something during the update? What should I check first of all?
Below is an excerpt from the Ignite log.
2020-12-08 22:05:25,683 [1] DEBUG [(null)] - Classpath resolved to: /app/libs/spring-jdbc-4.3.26.RELEASE.jar;/app/libs/spring-messaging-4.3.29.RELEASE.jar;/app/libs/ignite-indexing-2.9.0.jar;/app/libs/opencensus-impl-core-0.22.0.jar;/app/libs/jackson-annotations-2.10.1.jar;/app/libs/lucene-analyzers-common-7.4.0.jar;/app/libs/jackson-dataformat-smile-2.10.1.jar;/app/libs/commons-logging-1.1.1.jar;/app/libs/spring-context-4.3.26.RELEASE.jar;/app/libs/tyrus-standalone-client-1.15.jar;/app/libs/jackson-core-2.10.1.jar;/app/libs/spring-core-4.3.29.RELEASE.jar;/app/libs/control-center-agent-2.9.0.0.jar;/app/libs/commons-codec-1.11.jar;/app/libs/disruptor-3.4.2.jar;/app/libs/javassist-3.21.0-GA.jar;/app/libs/spring-tx-4.3.26.RELEASE.jar;/app/libs/spring-core-4.3.26.RELEASE.jar;/app/libs/commons-logging-1.2.jar;/app/libs/spring-beans-4.3.26.RELEASE.jar;/app/libs/h2-1.4.197.jar;/app/libs/ignite-core-2.9.0.jar;/app/libs/spring-aop-4.3.26.RELEASE.jar;/app/libs/reflections8-0.11.7.jar;/app/libs/cache-api-1.0.0.jar;/app/libs/spring-websocket-4.3.29.RELEASE.jar;/app/libs/lucene-core-7.4.0.jar;/app/libs/jackson-databind-2.10.1.jar;/app/libs/ignite-spring-2.9.0.jar;/app/libs/grpc-context-1.19.0.jar;/app/libs/lucene-queryparser-7.4.0.jar;/app/libs/spring-web-4.3.29.RELEASE.jar;/app/libs/ignite-shmem-1.0.0.jar;/app/libs/guava-26.0-android.jar;/app/libs/spring-expression-4.3.26.RELEASE.jar:/app/libs/spring-jdbc-4.3.26.RELEASE.jar:/app/libs/spring-messaging-4.3.29.RELEASE.jar:/app/libs/ignite-indexing-2.9.0.jar:/app/libs/opencensus-impl-core-0.22.0.jar:/app/libs/jackson-annotations-2.10.1.jar:/app/libs/lucene-analyzers-common-7.4.0.jar:/app/libs/jackson-dataformat-smile-2.10.1.jar:/app/libs/commons-logging-1.1.1.jar:/app/libs/spring-context-4.3.26.RELEASE.jar:/app/libs/tyrus-standalone-client-1.15.jar:/app/libs/jackson-core-2.10.1.jar:/app/libs/spring-core-4.3.29.RELEASE.jar:/app/libs/control-center-agent-2.9.0.0.jar:/app/libs/commons-codec-1.11.jar:/app/libs/disruptor-3.4.2.jar:/app/libs/javassist-3.21.0-GA.jar:/app/libs/spring-tx-4.3.26.RELEASE.jar:/app/libs/spring-core-4.3.26.RELEASE.jar:/app/libs/commons-logging-1.2.jar:/app/libs/spring-beans-4.3.26.RELEASE.jar:/app/libs/h2-1.4.197.jar:/app/libs/ignite-core-2.9.0.jar:/app/libs/spring-aop-4.3.26.RELEASE.jar:/app/libs/reflections8-0.11.7.jar:/app/libs/cache-api-1.0.0.jar:/app/libs/spring-websocket-4.3.29.RELEASE.jar:/app/libs/lucene-core-7.4.0.jar:/app/libs/jackson-databind-2.10.1.jar:/app/libs/ignite-spring-2.9.0.jar:/app/libs/grpc-context-1.19.0.jar:/app/libs/lucene-queryparser-7.4.0.jar:/app/libs/spring-web-4.3.29.RELEASE.jar:/app/libs/ignite-shmem-1.0.0.jar:/app/libs/guava-26.0-android.jar:/app/libs/spring-expression-4.3.26.RELEASE.jar:
2020-12-08 22:05:25,860 [1] DEBUG [(null)] - JVM started.
[22:05:26,184][INFO][main][XmlBeanDefinitionReader] Loading XML bean definitions from URL [file:/app/./kubernetes.config
...
2020-12-08 22:05:37,936 [70] INFO org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionDemander [(null)] - Completed rebalance future: RebalanceFuture [state=STARTED, grp=CacheGroupContext [grp=ignite-sys-cache], topVer=AffinityTopologyVersion [topVer=82, minorTopVer=0], rebalanceId=1, routines=4, receivedBytes=1200, receivedKeys=0, partitionsLeft=0, startTime=1607465137846, endTime=-1, lastCancelledTime=-1, next=null]
2020-12-08 22:05:37,936 [70] DEBUG org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionDemander [(null)] - Partitions have been scheduled to resend [reason=Rebalance is done, grp=ignite-sys-cache]
2020-12-08 22:05:37,937 [70] DEBUG org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionDemander [(null)] - Finished rebalancing partition: [grp=ignite-sys-cache, topVer=AffinityTopologyVersion [topVer=82, minorTopVer=0], supplier=12ca76f0-3286-4779-a426-408d5d6cf226, p=61]
2020-12-08 22:05:37,937 [70] DEBUG org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionDemander [(null)] - Will not request next demand message [grp=ignite-sys-cache, topVer=AffinityTopologyVersion [topVer=82, minorTopVer=0], supplier=12ca76f0-3286-4779-a426-408d5d6cf226, rebalanceFuture=RebalanceFuture [state=STARTED, grp=CacheGroupContext [grp=ignite-sys-cache], topVer=AffinityTopologyVersion [topVer=82, minorTopVer=0], rebalanceId=1, routines=4, receivedBytes=1200, receivedKeys=0, partitionsLeft=0, startTime=1607465137846, endTime=1607465137937, lastCancelledTime=-1, next=null]]
2020-12-08 22:05:37,943 [71] DEBUG org.apache.ignite.internal.processors.odbc.ClientListenerProcessor [(null)] - Grid runnable started: nio-acceptor-client-listener
2020-12-08 22:05:37,944 [72] DEBUG org.apache.ignite.internal.processors.odbc.ClientListenerProcessor [(null)] - Grid runnable started: grid-nio-worker-client-listener-0
2020-12-08 22:05:37,944 [1] DEBUG org.apache.ignite.internal.processors.service.IgniteServiceProcessor [(null)] - Started service processor.
2020-12-08 22:05:37,954 [73] DEBUG org.apache.ignite.internal.processors.service.ServiceDeploymentManager [(null)] - Grid runnable started: services-deployment-worker
2020-12-08 22:05:37,955 [73] DEBUG org.apache.ignite.internal.processors.service.ServiceDeploymentTask [(null)] - Started services deployment task init: [depId=ServiceDeploymentProcessId [topVer=AffinityTopologyVersion [topVer=81, minorTopVer=0], reqId=null], locId=c894369e-d55b-4d7b-8e5e-c990d0547121, evt=DiscoveryEvent [evtNode=TcpDiscoveryNode [id=c894369e-d55b-4d7b-8e5e-c990d0547121, consistentId=product-service-deployment-7c69d99ff6-vc6nb, addrs=ArrayList [10.0.2.27, 127.0.0.1], sockAddrs=HashSet [/127.0.0.1:47500, product-service-deployment-7c69d99ff6-vc6nb/10.0.2.27:47500], discPort=47500, order=81, intOrder=44, lastExchangeTime=1607465137554, loc=true, ver=2.9.0#20201015-sha1:70742da8, isClient=false], topVer=81, msgTemplate=null, span=org.apache.ignite.internal.processors.tracing.NoopSpan@3f4cf36, nodeId8=c894369e, msg=null, type=NODE_JOINED, tstamp=1607465136027]]
2020-12-08 22:05:38,017 [73] DEBUG org.apache.ignite.internal.processors.resource.GridResourceProcessor [(null)] - Injecting resources [obj=org.apache.ignite.internal.processors.platform.cluster.PlatformClusterNodeFilterImpl@5d421915]
2020-12-08 22:05:38,038 [1] DEBUG org.apache.ignite.internal.processors.rest.GridRestProcessor [(null)] - REST processor started.
2020-12-08 22:05:38,056 [74] DEBUG org.apache.ignite.internal.processors.rest.GridRestProcessor [(null)] - Grid runnable started: session-timeout-worker
2020-12-08 22:05:38,098 [32] DEBUG org.apache.ignite.internal.processors.timeout.GridTimeoutProcessor [(null)] - Timeout has occurred [obj=CancelableTask [id=d5e43644671-3ea29289-4345-4d80-8eab-97397473a5a9, endTime=1607465138070, period=10000, cancel=false, task=org.apache.ignite.internal.processors.query.h2.ConnectionManager$$Lambda$307/57085696@6197e588], process=true]
2020-12-08 22:05:38,110 [1] DEBUG org.apache.ignite.internal.processors.resource.GridResourceProcessor [(null)] - Injecting resources [obj=org.gridgain.control.agent.processor.lifecycle.ClusterLifecycleProcessor$$Lambda$586/893320639@55cff952]
2020-12-08 22:05:38,142 [75] DEBUG org.apache.ignite.internal.managers.communication.GridIoManager [(null)] - Message set has not been changed: GridCommunicationMessageSet [nodeId=3f89e86c-f636-4324-895b-1a77cec8ed11, endTime=1607465141249, timeoutId=8fe43644671-3ea29289-4345-4d80-8eab-97397473a5a9, topic=TOPIC_COMM_USER, plc=0, msgs=ConcurrentLinkedDeque [], reserved=false, timeout=5000, skipOnTimeout=true, lastTs=1607465136249]
2020-12-08 22:05:38,148 [1] WARN org.gridgain.control.agent.ControlCenterAgent [(null)] - Current Ignite configuration does not support tracing functionality and Control Center agent will not collect traces (consider adding ignite-opencensus module to classpath).
2020-12-08 22:05:38,152 [1] DEBUG org.apache.ignite.internal.processors.resource.GridResourceProcessor [(null)] - Injecting resources [obj=org.gridgain.control.agent.ControlCenterAgent$$Lambda$591/1985869725@151335cb]
2020-12-08 22:05:38,175 [76] DEBUG org.apache.ignite.internal.managers.communication.GridIoManager [(null)] - Message set has not been changed: GridCommunicationMessageSet [nodeId=3f89e86c-f636-4324-895b-1a77cec8ed11, endTime=1607465141249, timeoutId=8fe43644671-3ea29289-4345-4d80-8eab-97397473a5a9, topic=TOPIC_COMM_USER, plc=0, msgs=ConcurrentLinkedDeque [], reserved=false, timeout=5000, skipOnTimeout=true, lastTs=1607465136249]
2020-12-08 22:05:38,476 [73] DEBUG org.apache.ignite.internal.processors.service.ServiceDeploymentTask [(null)] - Calculated service assignment : [srvcId=56296344671-81118589-d216-4762-a835-3df2230389c5, srvcTop={c894369e-d55b-4d7b-8e5e-c990d0547121=1, 3f89e86c-f636-4324-895b-1a77cec8ed11=1}]
2020-12-08 22:05:38,484 [73] DEBUG org.apache.ignite.internal.processors.resource.GridResourceProcessor [(null)] - Injecting resources [obj=org.apache.ignite.internal.processors.platform.dotnet.PlatformDotNetServiceImpl@20119802]
*** stack smashing detected ***: terminated
Thank you!
ANSWER
Answered 2020-Dec-10 at 15:14
"stack smashing detected" usually indicates a NullReferenceException in C# code.
Set the COMPlus_EnableAlternateStackCheck environment variable to 1 before running your app to see the full stack trace (this works for .NET Core 3.0 and later).
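For example, in a Linux shell (the assembly name is hypothetical):

export COMPlus_EnableAlternateStackCheck=1
dotnet MyIgniteService.dll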
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install disruptor
You can use disruptor like any standard Java library. Please include the jar files in your classpath. You can also use any IDE, and you can run and debug the disruptor component as you would any other Java program. Best practice is to use a build tool that supports dependency management, such as Maven or Gradle. For Maven installation, please refer to maven.apache.org. For Gradle installation, please refer to gradle.org.
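A minimal sketch of typical usage (the event class and handler are illustrative, not part of the library API):

import com.lmax.disruptor.EventHandler;
import com.lmax.disruptor.RingBuffer;
import com.lmax.disruptor.dsl.Disruptor;
import com.lmax.disruptor.util.DaemonThreadFactory;

public class DisruptorExample {
    // Mutable event carried in the pre-allocated ring buffer slots
    static class LongEvent {
        long value;
    }

    public static void main(String[] args) {
        // Ring buffer size must be a power of two
        Disruptor<LongEvent> disruptor = new Disruptor<>(
                LongEvent::new, 1024, DaemonThreadFactory.INSTANCE);

        // Consumer: invoked on its own thread for each published event
        disruptor.handleEventsWith(
                (EventHandler<LongEvent>) (event, sequence, endOfBatch) ->
                        System.out.println("Got: " + event.value));

        disruptor.start();

        // Producer: claim a slot, mutate the event in place, publish
        RingBuffer<LongEvent> ringBuffer = disruptor.getRingBuffer();
        for (long i = 0; i < 10; i++) {
            long seq = ringBuffer.next();
            try {
                ringBuffer.get(seq).value = i;
            } finally {
                ringBuffer.publish(seq);
            }
        }
        disruptor.shutdown();
    }
}

Events are pre-allocated and mutated in place rather than created per message, which is how the library avoids allocation on the publishing hot path.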