kandi has reviewed flink and identified the top features listed below. This is intended to give you an instant insight into the functionality flink implements, and to help you decide if it suits your requirements.
A streaming-first runtime that supports both batch processing and data streaming programs
Elegant and fluent APIs in Java and Scala
A runtime that supports very high throughput and low event latency at the same time
Support for event time and out-of-order processing in the DataStream API, based on the Dataflow Model
Flexible windowing (time, count, sessions, custom triggers) across different time semantics (event time, processing time)
Fault-tolerance with exactly-once processing guarantees
Natural back-pressure in streaming programs
Libraries for Graph processing (batch), Machine Learning (batch), and Complex Event Processing (streaming)
Built-in support for iterative programs (BSP) in the DataSet (batch) API
Custom memory management for efficient and robust switching between in-memory and out-of-core data processing algorithms
Compatibility layers for Apache Hadoop MapReduce
Integration with YARN, HDFS, HBase, and other components of the Apache Hadoop ecosystem
Streaming Example
case class WordWithCount(word: String, count: Long)

val text = env.socketTextStream(host, port, '\n')

val windowCounts = text.flatMap { w => w.split("\\s") }
  .map { w => WordWithCount(w, 1) }
  .keyBy("word")
  .window(TumblingProcessingTimeWindows.of(Time.seconds(5)))
  .sum("count")

windowCounts.print()
Batch Example
case class WordWithCount(word: String, count: Long)

val text = env.readTextFile(path)

val counts = text.flatMap { w => w.split("\\s") }
  .map { w => WordWithCount(w, 1) }
  .groupBy("word")
  .sum("count")

counts.writeAsCsv(outputPath)
Building Apache Flink from Source
git clone https://github.com/apache/flink.git
cd flink
mvn clean package -DskipTests # this will take up to 10 minutes
Flink capped MapState
// Pseudocode: treat the MapState as a ring buffer keyed by an ever-increasing
// index; once it holds more than n entries, evict the oldest entry.
map[nextIndex++] = thisEvent;
if (nextIndex - oldestIndex > n) {
    map[oldestIndex++].clear();
}
Javadoc of the SingleOutputStreamOperator#returns(TypeHint<T> typeHint) method
import org.apache.flink.api.common.functions.MapFunction;

public class MyMapFunctionNonInferrableReturnType<T> implements MapFunction<AbstractDataModel, T> {
    @Override
    public T map(AbstractDataModel value) throws Exception {
        return (T) value.getValue();
    }
}
import java.util.ArrayList;
import java.util.List;

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class NonInferrableReturnTypeStreamJob {

    private final List<AbstractDataModel> abstractDataModelList;
    private final ValenciaSinkFunction sink;

    public NonInferrableReturnTypeStreamJob() {
        this.abstractDataModelList = new ArrayList<AbstractDataModel>();
        this.abstractDataModelList.add(new ConcreteModel("a", "1"));
        this.abstractDataModelList.add(new ConcreteModel("a", "2"));
        this.sink = new ValenciaSinkFunction();
    }

    public NonInferrableReturnTypeStreamJob(List<AbstractDataModel> abstractDataModelList, ValenciaSinkFunction sink) {
        this.abstractDataModelList = abstractDataModelList;
        this.sink = sink;
    }

    public static void main(String[] args) throws Exception {
        NonInferrableReturnTypeStreamJob concreteModelTest = new NonInferrableReturnTypeStreamJob();
        concreteModelTest.execute();
    }

    public void execute() throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.fromCollection(this.abstractDataModelList)
                .map(new MyMapFunctionNonInferrableReturnType())
                .returns(TypeInformation.of(String.class))
                .addSink(sink);
        env.execute();
    }
}
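For completeness, here is a minimal sketch of the helper types the example refers to. Their shapes are assumed from how they are used; the originals (AbstractDataModel, ConcreteModel, ValenciaSinkFunction) live in the question author's project, and the classes are shown together for brevity.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

import org.apache.flink.streaming.api.functions.sink.SinkFunction;

// Assumed base type: every model exposes a string value.
public abstract class AbstractDataModel {
    public abstract String getValue();
}

// Assumed concrete model carrying a key and a value.
public class ConcreteModel extends AbstractDataModel {
    private final String key;
    private final String value;

    public ConcreteModel(String key, String value) {
        this.key = key;
        this.value = value;
    }

    @Override
    public String getValue() {
        return value;
    }
}

// Assumed test sink: collects emitted values into a static list so the test
// below can assert on them.
public class ValenciaSinkFunction implements SinkFunction<String> {
    public static final List<String> values = Collections.synchronizedList(new ArrayList<>());

    @Override
    public void invoke(String value, Context context) {
        values.add(value);
    }
}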
import org.apache.flink.runtime.testutils.MiniClusterResourceConfiguration;
import org.apache.flink.test.util.MiniClusterWithClientResource;
import org.junit.ClassRule;
import org.junit.Test;
import org.sense.flink.examples.stream.valencia.ValenciaSinkFunction;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import static junit.framework.TestCase.assertEquals;
import static org.junit.Assert.assertTrue;
public class NonInferrableReturnTypeStreamJobTest {

    private static final int MIN_AVAILABLE_PROCESSORS = 4;

    // JUnit evaluates @ClassRule fields before the test class is constructed,
    // so the mini cluster must be created statically; assigning it from the
    // constructor (as in the original snippet) leaves the rule null and fails
    // the run. The slot count is capped by the available processors instead.
    @ClassRule
    public static final MiniClusterWithClientResource flinkCluster =
            new MiniClusterWithClientResource(
                    new MiniClusterResourceConfiguration.Builder()
                            .setNumberSlotsPerTaskManager(
                                    Math.min(MIN_AVAILABLE_PROCESSORS,
                                            Runtime.getRuntime().availableProcessors()))
                            .setNumberTaskManagers(1)
                            .build());

    @Test
    public void execute() throws Exception {
        List<AbstractDataModel> abstractDataModelList = new ArrayList<AbstractDataModel>();
        abstractDataModelList.add(new ConcreteModel("a", "1"));
        abstractDataModelList.add(new ConcreteModel("a", "2"));
        ValenciaSinkFunction.values.clear();

        NonInferrableReturnTypeStreamJob streamJob =
                new NonInferrableReturnTypeStreamJob(abstractDataModelList, new ValenciaSinkFunction());
        streamJob.execute();

        List<String> results = ValenciaSinkFunction.values;
        assertEquals(2, results.size());
        assertTrue(results.containsAll(Arrays.asList("1", "2")));
    }
}
toChangelogStream prints different kinds of changes
The excerpt below is from Flink's internal group-aggregation operator and shows how each incoming record becomes an INSERT, UPDATE_BEFORE/UPDATE_AFTER, or DELETE message; toChangelogStream then surfaces these row kinds.

// update aggregate result and set to the newRow
if (isAccumulateMsg(input)) {
    // accumulate input
    function.accumulate(input);
} else {
    // retract input
    function.retract(input);
}
// (preceding branch elided in the quote)
} else {
    // we retracted the last record for this key;
    // send out a delete message
    if (!firstRow) {
        // prepare delete message for previous row
        resultRow.replace(currentKey, prevAggValue).setRowKind(RowKind.DELETE);
        out.collect(resultRow);
    }
    // and clear all state
    accState.clear();
    // cleanup dataview under current key
    function.cleanup();
}
if (!recordCounter.recordCountIsZero(accumulators)) {
    // we aggregated at least one record for this key:
    // update the state
    accState.update(accumulators);
    // if this was not the first row, we may have to emit retractions
    if (!firstRow) {
        if (stateRetentionTime <= 0 && equaliser.equals(prevAggValue, newAggValue)) {
            // newRow is the same as before and state cleaning is not enabled.
            // We do not emit retraction and acc message.
            // If state cleaning is enabled, we have to emit messages to prevent
            // too-early state eviction in downstream operators.
            return;
        } else {
            // retract previous result
            if (generateUpdateBefore) {
                // prepare UPDATE_BEFORE message for previous row
                resultRow
                        .replace(currentKey, prevAggValue)
                        .setRowKind(RowKind.UPDATE_BEFORE);
                out.collect(resultRow);
            }
            // prepare UPDATE_AFTER message for new row
            resultRow.replace(currentKey, newAggValue).setRowKind(RowKind.UPDATE_AFTER);
        }
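To see these row kinds in action, here is a minimal, self-contained sketch (table and column names are invented for illustration) of how StreamTableEnvironment#toChangelogStream prints +I for the first result per key and -U/+U pairs for subsequent updates:

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class ChangelogDemo {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

        // A tiny inline table; the second 'alice' row updates her count.
        tEnv.executeSql(
                "CREATE TEMPORARY VIEW clicks AS "
                        + "SELECT * FROM (VALUES ('alice'), ('bob'), ('alice')) AS t(user_name)");

        Table counts = tEnv.sqlQuery(
                "SELECT user_name, COUNT(*) AS cnt FROM clicks GROUP BY user_name");

        // Prints +I for the first result per key, then -U/+U pairs for updates,
        // mirroring the RowKind logic in the excerpt above.
        tEnv.toChangelogStream(counts).print();
        env.execute();
    }
}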
Got "pyflink.util.exceptions.TableException: findAndCreateTableSource failed." when running PyFlink example
from pyflink.table import BatchTableEnvironment, EnvironmentSettings
from pyflink.table.expressions import lit

t_env = BatchTableEnvironment.create(
    environment_settings=EnvironmentSettings.new_instance()
        .in_batch_mode().use_blink_planner().build())
# reaches into PyFlink internals to force a parallelism of 1
t_env._j_tenv.getPlanner().getExecEnv().setParallelism(1)

my_source_ddl = """
    create table mySource (
        word VARCHAR
    ) with (
        'connector' = 'filesystem',
        'format' = 'csv',
        'path' = '/tmp/input'
    )
"""

my_sink_ddl = """
    create table mySink (
        word VARCHAR,
        `count` BIGINT
    ) with (
        'connector' = 'filesystem',
        'format' = 'csv',
        'path' = '/tmp/output'
    )
"""

t_env.execute_sql(my_source_ddl)
t_env.execute_sql(my_sink_ddl)

tab = t_env.from_path('mySource')
tab.group_by(tab.word) \
   .select(tab.word, lit(1).count) \
   .execute_insert('mySink').wait()
How to force Apache Flink to use a modified operator placement?
.map(...).startNewChain().slotSharingGroup("exceptional")
// or
.filter(...).startNewChain().slotSharingGroup("default")
How do I handle out-of-order events with Apache Flink?
yourStream.keyBy(...lookup p1 or p2 here...).process(new YourProcessFunction())
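A minimal sketch of what YourProcessFunction could look like, assuming a hypothetical Event type with a getTimestamp() accessor: it buffers events in keyed state and replays them in timestamp order once the watermark catches up.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

import org.apache.flink.api.common.state.ListState;
import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

public class YourProcessFunction extends KeyedProcessFunction<String, Event, Event> {

    private transient ListState<Event> buffer;

    @Override
    public void open(Configuration parameters) {
        buffer = getRuntimeContext().getListState(
                new ListStateDescriptor<>("buffer", Event.class));
    }

    @Override
    public void processElement(Event event, Context ctx, Collector<Event> out) throws Exception {
        // hold on to the event and ask to be called back once the
        // watermark has passed its timestamp
        buffer.add(event);
        ctx.timerService().registerEventTimeTimer(event.getTimestamp());
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<Event> out) throws Exception {
        // emit everything the watermark now covers, in timestamp order
        List<Event> pending = new ArrayList<>();
        for (Event e : buffer.get()) {
            pending.add(e);
        }
        pending.sort(Comparator.comparingLong(Event::getTimestamp));

        List<Event> stillPending = new ArrayList<>();
        for (Event e : pending) {
            if (e.getTimestamp() <= timestamp) {
                out.collect(e);
            } else {
                stillPending.add(e);
            }
        }
        buffer.update(stillPending);
    }
}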
-----------------------
SELECT *
FROM event
MATCH_RECOGNIZE (
    PARTITION BY particleId
    ORDER BY ts
    MEASURES
        b.ts AS ts,
        b.particleId AS pid,
        velocity(a, b) AS velocity  -- stands in for a user-defined velocity computation over the two matched rows
    AFTER MATCH SKIP TO NEXT ROW
    PATTERN (a b)
    DEFINE
        a AS TRUE,
        b AS TRUE
)
Flink job with CassandraSink fails with Error writing
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-cassandra_${scala.binary.version}</artifactId>
    <version>${flink.version}</version>
    <scope>provided</scope>
</dependency>
[INFO] +- org.apache.flink:flink-connector-cassandra_2.12:jar:1.10.0:provided
[INFO] | +- (io.dropwizard.metrics:metrics-core:jar:3.1.2:provided - omitted for conflict with 3.1.5)
[INFO] | \- (org.apache.flink:force-shading:jar:1.10.0:provided - omitted for duplicate)
Flink CoGroup outer join fails with high backpressure
public class Aggregator {
    private FlatHighFrequencyAnalog flatHighFrequencyAnalog;
    private EVWindow evWindow;

    public Aggregator(FlatHighFrequencyAnalog flatHighFrequencyAnalog) {
        this.flatHighFrequencyAnalog = flatHighFrequencyAnalog;
    }

    public Aggregator(EVWindow evWindow) {
        this.evWindow = evWindow;
    }

    public FlatHighFrequencyAnalog getFlatHighFrequencyAnalog() {
        return flatHighFrequencyAnalog;
    }

    public EVWindow getEVWindow() {
        return evWindow;
    }
}
SingleOutputStreamOperator<Either<EVWindow, FlatHighFrequencyAnalog>> stream1 =
environment
.addSource(createHFAConsumer())
.map(hfa -> Either.Left(hfa));
SingleOutputStreamOperator<Either<EVWindow, FlatHighFrequencyAnalog>> stream2 =
environment
.addSource(createHFDConsumer())
.map(hfd -> Either.Right(hfd));
DataStream<Message> pStream =
stream1
.union(stream2)
.assignTimestampsAndWatermarks(
WatermarkStrategy
.<Either<EVWindow, FlatHighFrequencyAnalog>>forBoundedOutOfOrderness(
ofSeconds(MAX_OUT_OF_ORDERNESS))
.withTimestampAssigner((input, timestamp) -> input.isLeft() ? input.left().getTimeStamp() : input.right().getTimeStamp()))
.keyBy(value -> value.isLeft() ? value.left().getId() : value.right().getId())
.window(TumblingEventTimeWindows.of(Time.minutes(MINUTES)))
.process(new ProcessWindowFunction());
List<EVWindow> evWindows =
Streams.stream(elements)
.filter(Either::isLeft)
.map(Either::left)
.collect(Collectors.toList());
List<FlatHighFrequencyAnalog> highFrequencyAnalogs =
Streams.stream(elements)
.filter(Either::isRight)
.map(Either::right)
.collect(Collectors.toList());
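A minimal sketch (type names assumed; the key is taken to be a String and Message a simple holder type) of the custom window function that new ProcessWindowFunction() above stands for, reassembling the two sides from the unioned stream as in the list-splitting fragment:

import java.util.ArrayList;
import java.util.List;

import org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.types.Either;
import org.apache.flink.util.Collector;

public class EitherWindowFunction
        extends ProcessWindowFunction<Either<EVWindow, FlatHighFrequencyAnalog>, Message, String, TimeWindow> {

    @Override
    public void process(
            String key,
            Context context,
            Iterable<Either<EVWindow, FlatHighFrequencyAnalog>> elements,
            Collector<Message> out) {

        // split the unioned window contents back into their two sides
        List<EVWindow> evWindows = new ArrayList<>();
        List<FlatHighFrequencyAnalog> highFrequencyAnalogs = new ArrayList<>();
        for (Either<EVWindow, FlatHighFrequencyAnalog> element : elements) {
            if (element.isLeft()) {
                evWindows.add(element.left());
            } else {
                highFrequencyAnalogs.add(element.right());
            }
        }

        // Message is assumed to combine both sides for downstream processing
        out.collect(new Message(evWindows, highFrequencyAnalogs));
    }
}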
Cannot change TEdit Text in Delphi
function TFormBase.addStringField(AField: TFieldBase; ATab: TTabsNames): TFormBase;
var
...
FEdit: TEdit;
FEvent: TEventBeforeInsert;
begin
...
FEdit := TEdit.Create(Self);
FEdit.Name := 'SomeUniqueNameHere'; // <-- for you to decide on...
...
FEvent := AField.EventBeforeInsert;
if Assigned(FEvent) then
FDictionary.Add(FEdit.Name, FEvent);
end;
TFormBase = class(TForm)
private
type TEventBeforeInsertPair = TPair<TComponent, TEventBeforeInsert>;
FBeforeInsertEvents: TList<TEventBeforeInsertPair>;
...
public
constructor Create;
destructor Destroy; override;
...
end;
...
constructor TFormBase.Create;
begin
inherited;
FBeforeInsertEvents := TList<TEventBeforeInsertPair>.Create;
end;
destructor TFormBase.Destroy;
begin
FBeforeInsertEvents.Free;
inherited;
end;
function TFormBase.addStringField(AField: TFieldBase; ATab: TTabsNames): TFormBase;
var
...
FEdit: TEdit;
FEvent: TEventBeforeInsert;
begin
...
FEdit := TEdit.Create(Self);
...
FEvent := AField.EventBeforeInsert;
if Assigned(FEvent) then
FBeforeInsertEvents.Add(TEventBeforeInsertPair.Create(FEdit, FEvent));
end;
procedure TFormBase.rectInsertClick(Sender: TObject);
var
Item: TEventBeforeInsertPair;
begin
for Item in FBeforeInsertEvents do
begin
if not Item.Value(Item.Key, Self) then
Exit;
end;
end;
...
TEventBeforeInsert = function(AComponent: TComponent; AForm: TForm): Boolean of Object;
Can Flink OutputTag be reused?
final OutputTag<String> errors = new OutputTag<String>("errors"){};
SingleOutputStreamOperator<Integer> task1 = ...;
SingleOutputStreamOperator<Integer> task2 = ...;
SingleOutputStreamOperator<Integer> task3 = ...;
DataStream<String> exceptions1 = task1.getSideOutput(errors);
DataStream<String> exceptions2 = task2.getSideOutput(errors);
DataStream<String> exceptions3 = task3.getSideOutput(errors);
DataStream<String> exceptions = exceptions1.union(exceptions2, exceptions3);
exceptions.addSink(new FlinkKafkaProducer(...));
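For context, here is a minimal sketch (names assumed) of how each task could populate that shared errors tag from inside a ProcessFunction; the same OutputTag instance can safely be used to read side output from several operators, as above:

import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;
import org.apache.flink.util.OutputTag;

// Hypothetical task: parses integers, routing bad records to the errors side output.
public class ParsingTask extends ProcessFunction<String, Integer> {

    private final OutputTag<String> errors;

    public ParsingTask(OutputTag<String> errors) {
        this.errors = errors;
    }

    @Override
    public void processElement(String value, Context ctx, Collector<Integer> out) {
        try {
            out.collect(Integer.parseInt(value));
        } catch (NumberFormatException e) {
            // anything that fails parsing goes to the shared side output
            ctx.output(errors, "bad record: " + value);
        }
    }
}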
QUESTION
Flink capped MapState
Asked 2022-Apr-08 at 09:03

We want to keep the last n unique ids in a Flink operator's state. When the (n+1)-th unique id arrives, we want to keep it and drop the oldest unique id in the state, in order to avoid an ever-growing state. We already have a TTL (expiration time) mechanism in place; the size limit is another restriction we're looking to put in place. Not every element holds a unique id.

Does Flink provide an API that limits the number of elements in the state? We currently use a MapState with a StateTtlConfig-generated TTL/expiration mechanism.

ANSWER
Answered 2022-Apr-07 at 14:30

I don't think Flink has a state type that supports this out of the box. The closest thing I can think of is to use ListState. With ListState you can append elements as you would to a regular list. For your use case, you would read the state and call .get(), which gives you an iterable you can iterate over, remove the item you'd like to drop, and then push the state back.

From a performance perspective, the iteration may not be ideal, but on the other hand it is insignificant compared to the cost of accessing state on disk (in case you're using RocksDB as a state backend), which incurs heavy serialization and deserialization overhead.
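A minimal sketch of the suggested ListState approach, assuming a hypothetical Event type whose getUniqueId() may return null (since not every element holds a unique id); ids are taken to be Strings:

import java.util.ArrayList;
import java.util.List;

import org.apache.flink.api.common.state.ListState;
import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

public class CappedIdFunction extends KeyedProcessFunction<String, Event, Event> {

    private final int maxIds;
    private transient ListState<String> recentIds;

    public CappedIdFunction(int maxIds) {
        this.maxIds = maxIds;
    }

    @Override
    public void open(Configuration parameters) {
        recentIds = getRuntimeContext().getListState(
                new ListStateDescriptor<>("recent-ids", String.class));
    }

    @Override
    public void processElement(Event event, Context ctx, Collector<Event> out) throws Exception {
        String id = event.getUniqueId();
        if (id == null) {              // not every element carries a unique id
            out.collect(event);
            return;
        }
        // read the list, de-duplicate, append the newest id at the tail
        List<String> ids = new ArrayList<>();
        for (String s : recentIds.get()) {
            ids.add(s);
        }
        ids.remove(id);
        ids.add(id);
        if (ids.size() > maxIds) {
            ids.remove(0);             // drop the oldest id
        }
        recentIds.update(ids);         // push the state back
        out.collect(event);
    }
}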
Community discussions and code snippets contain sources that include the Stack Exchange Network.