NOTE: I have been studying a lot about threading for the past few months and have some awesome additions in store! They will take a while to come out though. The goal of the library is still to provide a simple and efficient way to multi task in lua
In Changes you’ll find documentation for (In Order):
My multitasking library for lua. It is a pure lua binding if you ignore the integrations and the love2d compat. If you find any bugs or have any issues, please let me know :). If you don’t see a table of contents try using the ReadMe.html file. It is easier to navigate the readme
INSTALLING
Note: The latest version of Lua Lanes is required if you want to make use of system threads on Lua 5.1+. I will update the dependencies for luarocks since this library should work fine on Lua 5.1+
To install copy the multi folder into your environment and you are good to go
If you want to use the system threads, then you’ll need to install lanes!
or use luarocks
luarocks install bin -- To use the new save state stuff
luarocks install multi
Note: In the near future you may be able to run multitasking code on multiple machines (network parallelism). This however will have to wait until I hammer out some bugs within the core of system threading itself.
See the rambling section to get an idea of how this will work.
Discord
For real-time assistance with my libraries! A place where you can ask questions and get help with any of my libraries. Also, you can request features and stuff there as well.
https://discord.gg/U8UspuA
Upcoming Plans: Adding network support for threading. Kind of like your own Lua cloud. This will require the bin, net, and multi libraries. Once that happens I will include those libraries as a set. This also means that you can expect both standalone and joined versions of the libraries.
Planned features/TODO
- Add system threads for love2d that work like the lanesManager (loveManager, slight differences).
- Improve performance of the library
- Improve coroutine based threading scheduling
- Improve love2d Idle thread CPU usage/Fix the performance when using system threads in love2d… Tricky. Look at the rambling section for insight.
- Add more control to coroutine based threading
- Add more control to system-based threading
- Make practical examples that show how you can solve real problems
- Add more features to support module creators
- Make a framework for easier thread task distributing
- Fix error handling on threaded multi objects. Non-threaded multi objects will crash your program if they error though! Use multi:newThread() or multi:newSystemThread() if your code can error! Unless you use multi:protect(); this however lowers performance!
- Add multi:OnError(function(obj,err))
- sThread.wrap(obj) May or may not be completed. Theory: allows interaction in one thread to affect it in another. The addition to threaded tables may make this possible!
- SystemThreaded Actors. After some tests I figured out a way to make this work… It will work slightly differently though. This is due to the actor needing to be splittable…
- Load balancing for system threads (once SystemThreaded Actors are done)
- Add more integrations
- Fix SystemThreadedTables
- Finish the wiki stuff. (11% done)
- Test for unknown bugs
Known Bugs/Issues
Regarding integrations, thread cancellation works slightly differently for love2d and lanes. Within love2d I was unable to (too lazy to…) avoid using the multi library within the thread. A fix for this is to call multi:Stop() when you are done with your threaded code! THREAD.kill() should do the trick from within the thread; a listener could be made to detect when a thread kill has been requested and sent to the running thread. This may change however if I find a way to work around this. In love2d, to mimic the GLOBAL table I needed the library to constantly sync the data… You can use the sThread.waitFor(varname) or sThread.hold(func) methods to sync the global data and get the value instead of using GLOBAL. If you want to go this route, I suggest setting multi.isRunning=true to prevent the auto runner from doing its thing! This will make the multi manager no longer function, but that's the point :P
Another bug concerns the SystemThreadedJobQueue: only one can be used for now. This is going to change in a future update.
SystemThreadedTables only support one table between the main and worker thread! They do not work when shared between 2 or more threads. If you need that much flexibility, use the GLOBAL table that all threads have. FIXED
For module creators using this library: I suggest using SystemThreadedQueues instead of SystemThreadedTables for rapid data transfer. If you plan on having constants that will always be the same, then a table is a good idea! They support up to n threads and can be messed with and abused as much as you want :D FIXED Use what you want!
Love2D SystemThreadedTables do not send love2d userdata; use queues instead for that! FIXED
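The love2d workaround described above might look roughly like this (a sketch: sThread.waitFor, multi.isRunning, and THREAD.kill() are the names used in this section, but the surrounding structure is an assumption, not code taken from the library):

```lua
-- sketch of code running inside a love2d system thread
multi.isRunning = true -- keep the auto runner from driving the multi manager

-- sync/fetch a global instead of reading GLOBAL directly
-- ("someVar" is a hypothetical variable name)
local value = sThread.waitFor("someVar")

-- ... do the threaded work with value ...

THREAD.kill() -- request this thread's termination when done
```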
Usage:
alarm=multi:newAlarm(3) -- rings after 3 seconds
alarm:OnRing(function(self)
    print("Ring!")
    self:Reset() -- if n were given, Reset(n) would reset to n seconds; with nil it resets back to 3
end)
multi:mainloop() -- the main loop of the program; multi:umanager() exists as well to allow integration into other loops, e.g. love2d's love.update function. More on this binding in the wiki!
The library is modular, so you only need to require what you need. Because of this, the global environment is altered
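For example, the core and an integration are required separately (the lanesManager path here mirrors the networkManager require shown later in this README; treat the exact path and return values as assumptions):

```lua
require("multi") -- core objects only: loops, alarms, events, steps, ...

-- system threading comes from an integration module, which returns its own helpers
GLOBAL, sThread = require("multi.integration.lanesManager").init()
```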
There are many useful objects that you can use
Check out the wiki for detailed usage, but here are the objects:
- Process#
- Queue/Queuer#
- Alarm
- Loop
- Event
- Step
- Range
- TStep
- TLoop
- Condition
- Connection
- Timer
- Updater
- Thread*
- Trigger
- Task
- Job
- Function
- Watcher
Note: Both a process and a queue act like the multi namespace, but allow for some cool things. Because they use the other objects, an example on them will be done last
*Uses the built-in coroutine features of Lua; these have an interesting interaction with the other means of multitasking
Triggers are kind of useless after the creation of the Connection
Watchers have no real purpose either; I made them just because.
Examples of each object being used
We already showed alarms in action so let’s move on to a Loop object
Throughout these examples I am going to do some strange things to show other features of the library!
LOOPS
-- Loops have been moved to the core of the library; require("multi") would work as well
require("multi") -- gets the entire library
count=0
loop=multi:newLoop(function(self,dt) -- dt is delta time and self is a reference to the loop object
count=count+1
if count > 10 then
        self:Break() -- All methods on the multi objects are upper camel case, whereas methods on the multi or process/queuer namespace are lower camel case
-- self:Break() will stop the loop and trigger the OnBreak(func) method
-- Stopping is the act of Pausing and deactivating the object! All objects can have the multiobj:Break() command on it!
    else
        print("Loop #"..count.."!")
    end
end)
loop:OnBreak(function(self)
    print("You broke me :(")
end)
multi:mainloop()
Output
Loop #1!
Loop #2!
Loop #3!
Loop #4!
Loop #5!
Loop #6!
Loop #7!
Loop #8!
Loop #9!
Loop #10!
You broke me :(
With loops out of the way let's go down the line
This library aims to be async-like. Everything is still on one thread unless you are using the lanes integration module, a stable WIP (more on that later)
EVENTS
-- Events, these were the first objects introduced into the library. I seldom use them in their pure form though, but later you'll see their advanced uses!
-- Events on their own don't really do much... We are going to need at least 2 objects to get something going
require("multi") -- gets the entire library
count=0
-- let's use the loop again to add to count!
loop=multi:newLoop(function(self,dt)
count=count+1
end)
event=multi:newEvent(function() return count>=100 end) -- fires when its condition returns true
event:OnEvent(function(self)
    loop:Break()
    print("Stopped that loop!")
end) -- events like alarms need to be reset; the Reset() command works here as well
multi:mainloop()
Output
Stopped that loop!
STEPS
require("multi")
-- Steps are like for loops but non-blocking... You can run a loop to infinity and everything will still run. I will combine Steps with Ranges in this example.
step1=multi:newStep(1,10,1,0) -- Some explaining is due. Argument 1 is the Start # Argument 2 is the ResetAt # (inclusive) Argument 3 is the count # (in our case we are counting by +1, this can be -1 but you need to adjust your start and resetAt numbers)
-- The 4th Argument is for skipping. This is useful for timing and for basic priority management. A priority management system is included!
step2=multi:newStep(10,1,-1,1) -- a second step, notice the slight changes!
step1:OnStart(function(self)
    print("Step Started!")
end)
step1:OnStep(function(self,pos)
    if pos<=10 then -- The step only goes to 10!
print("Stepping... "..pos)
else
print("How did I get here?")
    end
end)
step1:OnEnd(function(self)
print("Done!")
    -- We finished here, but I feel like we could have reused this step in some way... I could use Reset(), but what if I wanted to change it...
if self.endAt==10 then -- lets only loop once
self:Update(1,11,1,0) -- oh now we can reach that else condition!
end
-- Note Update() will restart the step!
end)
-- step2 is bored, let's give it some love :P
step2.range=step2:newRange() -- Set up a range object to have a nested step in a sense! Each nest requires a new range
-- it is in your interest not to share ranges between objects! You can however do it if it suits your needs though
step2:OnStep(function(self,pos)
    -- for i=1,math.huge do
    --     print("I am holding the code up because I can!")
--end
    -- We don't want to hold things up, but we want to nest.
    -- Note: a range is not necessary if the nested for loop has a small range; if however the range is rather large, you may want to allow other objects to do some work
for i in self.range(1,100) do
        print(pos,i) -- Now our nested for loop is using a range object, which allows other objects to get some CPU time while this one is running
end
end)
-- TSteps are just like alarms and steps mixed together; the only difference in construction is the 4th argument. On a TStep that argument controls time. The default is 1
-- The Reset(n) works just like you would figure!
step3=multi:newTStep(1,10,.5,2) -- let's go from 1 to 10 counting by .5 every 2 seconds
step3:OnStep(function(self,pos)
print("Ok "..pos.."!")
end)
multi:mainloop()
Output
Note: the output on this one is huge!!! So I had to … some parts! You need to run this for yourself to see what is going on!
Step Started!
Stepping… 1
10 1
Stepping… 2
10 2
Stepping… 3
10 3
…
Ok 9.5!
Ok 10!
TLOOPS
require("multi")
-- TLoops are loops that run every n seconds. We will also look at Condition objects as well
-- Here we are going to modify the old loop to be a little different
count=0
loop=multi:newTLoop(function(self) -- We are only going to count with this loop, but doing so using a condition!
while self:condition(self.cond) do
count=count+1
end
    print("Count is "..count.."!")
    self:Destroy() -- Let's destroy this object, casting it to the dark abyss MUHAHAHA!!!
-- the reference to this object will be a phantom object that does nothing!
end,1) -- Notice the ',1' after the function! This is where you put your time value!
loop.cond=multi:newCondition(function() return count<=100 end) -- conditions need a bit of work before I am happy with them
multi:mainloop()
Output
Count is 101!
Connections
These are my favorite objects and you’ll see why. They are very useful objects for ASync connections!
require("multi")
-- Let's create the events
yawn={} -- I'll just leave that there
OnCustomSafeEvent=multi:newConnection(true) -- let's pcall the calls in case something goes wrong (the default)
OnCustomEvent=multi:newConnection(false) -- let's not pcall the calls and let errors happen... We are good at coding though, so let's get a speed advantage by not pcalling. Pcalling is useful for plugins and stuff that may have been coded badly, and you can ignore those connections if need be.
OnCustomEvent:Bind(yawn) -- create the connection lookup data in yawn
-- Let's connect to them; a recent update adds a nice syntax to connect to these
cd1=OnCustomSafeEvent:Connect(function(arg1,arg2,...)
print("CSE1",arg1,arg2,...)
end,"bob") -- let's give this connection a name
cd2=OnCustomSafeEvent:Connect(function(arg1,arg2,...)
print("CSE2",arg1,arg2,...)
end,"joe") -- let's give this connection a name
cd3=OnCustomSafeEvent:Connect(function(arg1,arg2,...)
print("CSE3",arg1,arg2,...)
end) -- let's not give this connection a name
-- no need for connect, but I kept that function because of backwards compatibility.
OnCustomEvent(function(arg1,arg2,...)
    print(arg1,arg2,...)
end)
-- Now within some loop/other object you trigger the connection like
OnCustomEvent:Fire(1,2,"Hello!!!") -- fire all connections
-- You may have noticed that some events have names! See the following example!
OnCustomSafeEvent:getConnection("bob"):Fire(1,100,"Bye!") -- fire only bob!
OnCustomSafeEvent:getConnection("joe"):Fire(1,100,"Hello!") -- fire only joe!
print("------")
OnCustomSafeEvent:Fire(1,100,"Hi Ya Folks!!!") -- fire them all again!!!
Output
1 2 Hello!!!
CSE1 1 100 Bye!
CSE2 1 100 Hello!
CSE1 1 100 Hi Ya Folks!!!
CSE2 1 100 Hi Ya Folks!!!
CSE3 1 100 Hi Ya Folks!!!
CSE2 1 100 Hi Ya Folks!!!
CSE3 1 100 Hi Ya Folks!!!
You may think timers should be bundled with alarms, but they are a bit different and have cool features
TIMERS
-- You see, the thing is that all time-based objects use timers, e.g. Alarms, TSteps, and Loops. Timers are more low level!
require("multi")
local clock = os.clock
function sleep(n) -- seconds
    local t0 = clock()
    while clock() - t0 <= n do end
end
timer=multi:newTimer()
timer:Start()
-- let's do a mock alarm
set=3 -- 3 seconds
a=0
while timer:Get()<=set do
end
print(set.." second(s) have passed!")
print(timer:Get())
timer:Pause()
print(timer:Get()) -- paused: the value stays the same
sleep(3)
timer:Resume()
sleep(1)
print(timer:Get()) -- should be really close to the value of set + 2
Output
Note: This will make more sense when you run it for yourself
3 second(s) have passed!
3.001
3.001
4.002
4.002
4.002
5.003
UPDATER
-- Updaters have been moved to the core of the library; require("multi") would work as well
require("multi")
updater=multi:newUpdater(5) -- really simple; think of a loop with the skip feature of a step
updater:OnUpdate(function(self)
--print("updating...")
end)
-- Here every 5 steps the updater will do stuff!
-- But I feel it is now time to touch on priority management, so let's get into basic priority stuff and then a more advanced version of it
--[[
multi.Priority_Core -- Highest form of priority
multi.Priority_High
multi.Priority_Above_Normal
multi.Priority_Normal -- The default form of priority
multi.Priority_Below_Normal
multi.Priority_Low
multi.Priority_Idle -- Lowest form of priority
We aren't going to use regular objects to test priority, but rather benchmarks!
to set priority on an object though you would do
multiobj:setPriority(one of the above)
]]
-- let's bench for 3 seconds using the 3 forms of priority! First no Priority
multi:benchMark(3,nil,"Regular Bench: "):OnBench(function() -- the onbench() allows us to do each bench after each other!
print("P1\n---------------")
multi:enablePriority()
multi:benchMark(3,multi.Priority_Core,"Core:")
multi:benchMark(3,multi.Priority_High,"High:")
multi:benchMark(3,multi.Priority_Above_Normal,"Above_Normal:")
multi:benchMark(3,multi.Priority_Normal,"Normal:")
multi:benchMark(3,multi.Priority_Below_Normal,"Below_Normal:")
multi:benchMark(3,multi.Priority_Low,"Low:")
multi:benchMark(3,multi.Priority_Idle,"Idle:"):OnBench(function()
print("P2\n---------------")
    -- Finally, the 3rd form
multi:enablePriority2()
multi:benchMark(3,multi.Priority_Core,"Core:")
multi:benchMark(3,multi.Priority_High,"High:")
    multi:benchMark(3,multi.Priority_Above_Normal,"Above_Normal:")
    multi:benchMark(3,multi.Priority_Normal,"Normal:")
    multi:benchMark(3,multi.Priority_Below_Normal,"Below_Normal:")
    multi:benchMark(3,multi.Priority_Low,"Low:")
    multi:benchMark(3,multi.Priority_Idle,"Idle:")
end)
end)
multi:mainloop() -- Notice how the past few examples did not need this, well only actors need to be in a loop! More on this in the wiki.
Output
Note: These numbers will vary drastically depending on your compiler and CPU power
Regular Bench: 2094137 Steps in 3 second(s)!
P1
Below_Normal: 236022 Steps in 3 second(s)!
Normal: 314697 Steps in 3 second(s)!
Above_Normal: 393372 Steps in 3 second(s)!
High: 472047 Steps in 3 second(s)!
Core: 550722 Steps in 3 second(s)!
Low: 157348 Steps in 3 second(s)!
Idle: 78674 Steps in 3 second(s)!
P2
Core: 994664 Steps in 3 second(s)!
High: 248666 Steps in 3 second(s)!
Above_Normal: 62166 Steps in 3 second(s)!
Normal: 15541 Steps in 3 second(s)!
Below_Normal: 3885 Steps in 3 second(s)!
Idle: 242 Steps in 3 second(s)!
Low: 971 Steps in 3 second(s)!
Notice: Even though I started each bench at the same time, the order in which they finished differed; the order is likely to vary on your machine as well!
Processes
A process allows you to group the Actor objects within a controllable interface
proc=multi:newProcessor() -- takes an optional file as an argument
b=0
loop=proc:newTLoop(function(self)
a=a+1
    proc:Pause() -- pauses the CPU cycler for this processor! Individual objects are not paused; however, because they aren't getting CPU time they act as if they were paused
end,.1)
updater=proc:newUpdater(multi.Priority_Idle) -- priority can be used in skip arguments as well to manage priority without enabling it!
updater:OnUpdate(function(self)
    b=b+1
end)
a=0 -- a counter
loop2=proc:newLoop(function(self,dt)
    print("Let's Go!")
self:hold(3) -- this will keep this object from doing anything! Note: You can only have one hold active at a time! Multiple are possible, but results may not be as they seem see * for how hold works
    -- Within a process, using hold will keep it alive until the hold is satisfied!
print("Done being held for 1 second")
self:hold(function() return a>10 end)
print("A is now: "..a.." b is also: "..b)
self:Destroy()
    self.Parent:Pause() -- let's say you don't have the reference to the process!
os.exit()
end)
-- Notice this is now being created on the multi namespace
event=multi:newEvent(function() return a>0 end) -- resume the process once it has done some work
event:OnEvent(function(self)
    proc:Resume()
end)
proc:Start()
multi:mainloop()
Output
Let's Go!
Done being held for 1 second
A is now: 29 b is also: 479
Hold: This method works as follows
if type(task)=='number' then -- a sleep cmd
local timer=multi:newTimer()
timer:Start()
    while timer:Get()<task do -- This while loop is what makes using multiple holds tricky... If the outer while is good before the nested one, then the outer one will have to wait! There is a way around this though!
if love then
self.Parent:lManager()
else
queue:newLoop(function(self)
    print("Done")
end)
queue:Start()
multi:mainloop()
Expected Output
Note: the queuer still does not work as expected!
Ring ring!!!
1
2
3
4
5
6
7
8
9
10
Done
Actual Output
Done
1
2
3
4
5
6
7
8
9
10
Ring ring!!!
Threads
These fix the hold problem that you get with regular objects, and they work exactly the same! They even have some extra features that make them really useful.
-- (thread example: threads printing "Hello!" and "step n" while an alarm rings and a timer is read)
multi:mainloop()
Output
Ring
0.992
0.992
Hello!
step 1
step 2
Hello!
Ring
2.092
step 3
Hello!
Ring
Count is 100
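As a rough illustration of the coroutine thread API (a sketch: thread.sleep and thread.hold are the helpers the library exposes inside a thread, but treat the exact shapes as assumptions; this is not the example that produced the output above):

```lua
require("multi")
ready = false
multi:newThread("ExampleThread", function()
    print("step 1")
    thread.sleep(1) -- suspends only this thread; every other object keeps running
    print("step 2")
    thread.hold(function() return ready end) -- resumes once the condition returns true
    print("step 3")
end)
alarm = multi:newAlarm(2)
alarm:OnRing(function() ready = true end)
multi:mainloop()
```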
Threadable Actors
- Alarms
- Events
- Loop/TLoop
- Process
- Step/TStep
Functions
If you ever wanted to pause a function, then great, now you can
The use of the Function object allows one to have a method that can run free in a sense
trig=multi:newTrigger(function(...)
    print(...)
end)
trig:Fire(1,2,3)
trig:Fire(1,2,3,"Hello",true)
Output
1 2 3
1 2 3 Hello true
Tasks
Tasks allow you to run a block of code before the multi mainloop does its thing. Tasks still have a use, but depending on how you program they aren't needed.
loop=multi:newLoop(function(self)
    print("Hello there!")
    self:Break()
end)
multi:newTask(function()
    print("Hi!")
end)
multi:mainloop()
Output
Hi!
Hello there!
Which came first the task or the loop?
As seen in the example above the tasks were done before anything else in the mainloop! This is useful when making libraries around the multitasking features and you need things to happen in a certain order!
Jobs
Jobs were a strange feature that was created for throttling connections! When I was building an IRC bot around this library, I couldn't have messages posting too fast due to restrictions. Jobs allowed functions to be added to a queue and executed after a certain amount of time had passed
Output
false 0
true 4
There are 4 jobs in the queue!
A job!
Another job!
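The throttling idea behind jobs can be sketched with core objects alone (this is not the Job API; it only illustrates queued functions being executed after a delay):

```lua
require("multi")
local jobs = {} -- queued functions waiting their turn
table.insert(jobs, function() print("A job!") end)
table.insert(jobs, function() print("Another job!") end)
print("There are "..#jobs.." jobs in the queue!")
-- drain one job every 2 seconds so nothing fires too fast
multi:newTLoop(function(self)
    local job = table.remove(jobs, 1)
    if job then job() else self:Break() end
end, 2)
multi:mainloop()
```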
Watchers
Watchers allow you to monitor a variable and trigger an event when the variable has changed!
require("multi")
a=0
watcher=multi:newWatcher(_G,"a") -- watch a in the global environment
watcher:OnValueChanged(function(self,old,new)
print(old,new)
end)
tloop=multi:newTLoop(function(self)
a=a+1
end,1)
multi:mainloop()
Output
0 1
1 2
2 3
…
.inf-1 inf
Timeout management
-- Note: I used a tloop so I could control the output of the program a bit.
require("multi")
a=0
inc=1 -- change to 0 to see the condition never met, 1 to see the first condition not met but the second met, and 2 to see it meet the condition on the first go.
loop:OnTimedOut(function(self)
end
end)
multi:mainloop()
Output (Change the value inc as indicated in the comment to see the outcomes!)
Looping…
Looping…
Looping…
Looping…
Looping…
Looping…
Looping…
Looping…
Looping…
Loop timed out! tloop Trying again…
Looping…
Looping…
Looping…
Looping…
Looping…
We did it! 1 2 3
Rambling
5/23/18:
When it comes to running code across different systems we run into a problem: it takes time to send objects from one machine to another. In the beginning only local networks will be supported. I may add support for sending commands to another network to do computing, like having your own Lua cloud. Userdata will never be allowed to run on other machines. It is not possible unless the library you are using allows userdata to be turned into a string and back into an object. With this feature you want to send a command that will take time, or tons of them (millions+); the reason being that networks are not that "fast" and only simple objects can be sent. If you mirror your environment then you can do some cool things.
The planned structure will be something like this:
multi-Single Threaded Multitasking
multi-Threads
multi-System Threads
multi-Network threads
where netThreads can contain systemThreads which can in turn contain both Threads and single threaded multitasking
Nothing has been built yet, but the system will work something like this:
host:
"NetThread_1",
multi:mainloop()
node:
GLOBAL,sThread=require("multi.integration.networkManager").init() -- This will determine if one is using lanes,love2d, or luvit
node = multi:newNode("NodeName","MainSystem") -- Search the network for the host, connect to it and be ready for requests!
--- On the main thread, a simple multi:newNetworkThread thread and also non system threads, you can access global data without an issue. When dealing with system threads is when you have a problem.
+-- On the main thread, a simple multi:newNetworkThread thread and non-system threads, you can access global data without an issue. When dealing with system threads is when you have a problem.
node:setLog{
maxLines = 10000,
cleanOnInterval = true,
@@ -1093,13 +1093,13 @@ node:setLog{
noLog = false -- default is false, make true if you do not need a log
}
node:settings{
- maxJobs = 100, -- Job queues will respect this as well as the host when it is figuting out which node is under the least load. Default: 0 or infinite
+ maxJobs = 100, -- Job queues will respect this as well as the host when it is figuring out which node is under the least load. Default: 0 or infinite
	sendLoadInterval = 60, -- every 60 seconds update the host of the node's load
	sendLoad = true -- default is true, tells the server how stressed the system is
}
multi:mainloop()
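The settings above feed the host's load balancing; as a hypothetical sketch (not the actual networkManager code) of how a host could pick the least-loaded node while respecting maxJobs:

```lua
-- Hypothetical load-balancing pick: skip nodes that have hit their
-- maxJobs limit, then take the lowest reported load.
-- maxJobs = 0 means "no limit", matching the default described above.
local function pickNode(nodes)
  local best
  for _, node in ipairs(nodes) do
    local full = node.maxJobs > 0 and node.jobs >= node.maxJobs
    if not full and (not best or node.load < best.load) then
      best = node
    end
  end
  return best -- nil if every node is saturated
end

local nodes = {
  { name = "A", load = 0.7, jobs = 2,   maxJobs = 0 },
  { name = "B", load = 0.2, jobs = 100, maxJobs = 100 }, -- at its limit
  { name = "C", load = 0.4, jobs = 10,  maxJobs = 100 },
}
print(pickNode(nodes).name) -- C
```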
--- Note: the node will contain a log of all the commands that it gets. A file called "NodeName.log" will contain the info. You can set the limit by lines or file size. Also you can set it to clear the log every interval of time if an error does not exist. All errors are both logged and sent to the host as well. You can have more than one host and more than one node(duh :P).
-
+-- Note: the node will contain a log of all the commands that it gets. A file called "NodeName.log" will contain the info. You can set the limit by lines or file size. Also, you can set it to clear the log every interval of time if an error does not exist. All errors are both logged and sent to the host as well. You can have more than one host and more than one node(duh :P).
+
The goal of the node is to set up a simple and easy way to run commands on a remote machine.
There are 2 main ways you can use this feature. 1. One node per machine with system threads being able to use the full processing power of the machine. 2. Multiple nodes on one machine where each node is acting like its own thread. And of course, a mix of the two is indeed possible.
Love2d Sleeping reduces the CPU time making my load detection think the system is under more load, thus preventing it from sleeping… I will investigate other means. As of right now it will not eat all your CPU if threads are active. For now, I suggest killing threads that aren’t needed anymore. On lanes threads at idle use 0% CPU and it is amazing. A state machine may solve what I need though. One state being idle state that sleeps and only goes into the active state if a job request or data is sent to it… after some time of not being under load it will switch back into the idle state… We’ll see what happens.
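The idle/active state machine described above could be sketched like this (purely hypothetical, nothing like this ships in the library yet):

```lua
-- Sketch of an idle/active worker: stay active while jobs are arriving,
-- and fall back to an idle (sleeping) state after a quiet period.
local worker = {
  state = "idle",
  lastJob = 0,
  idleAfter = 5, -- seconds without work before going back to idle
}

function worker:onJob(now)
  self.lastJob = now
  self.state = "active"
end

function worker:tick(now)
  if self.state == "active" and now - self.lastJob > self.idleAfter then
    self.state = "idle" -- a real worker would sleep between polls here
  end
  return self.state
end

worker:onJob(0)
print(worker:tick(3))  -- active (a job arrived 3 seconds ago)
print(worker:tick(10)) -- idle   (no work for 10 seconds)
```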
Love2d doesn’t like to send functions through channels. By default, it does not support this. I achieve this by dumping the function and loadstring-ing it on the thread. This however is slow. For the System Threaded Job Queue, I had to change my original idea of sending functions as jobs. The way you do it now is register a job function once and then call that job across the thread through a queue. Each worker thread pops from the queue and returns the job. The Job ID is automatically updated and allows you to keep track of the order that the data comes in. A table with # indexes can be used to organize the data…
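The job-ID ordering described above can be illustrated with plain tables (hypothetical names here, not the SystemThreadedJobQueue API):

```lua
-- Workers may finish jobs out of order, but because each result carries
-- its job ID, storing results by ID lets you read them back in
-- submission order via a '#'-indexed table.
local results = {}
local function onResult(id, value)
  results[id] = value
end

-- pretend three workers finished in a scrambled order
onResult(2, "b")
onResult(3, "c")
onResult(1, "a")

for id = 1, #results do
  print(id, results[id]) -- 1 a, then 2 b, then 3 c
end
```

`#results` is only reliable once the IDs are contiguous, i.e. every submitted job has reported back.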
Regarding benchmarking: if you see my benchmarks and are wondering why they are 10x better, it’s because I am using luajit for my tests. I highly recommend using luajit with my library, but lua 5.1 will work just as well, only not as fast.
So, while working on the jobQueue:doToAll() method I figured out why love2d’s threaded tables were acting up when more than 1 thread was sharing the table. It turns out 1 thread was eating all the pops from the queue and starved all the other queues… I’ll need to use the same trick I did with GLOBAL to fix the problem… However, at the rate I am going threading in love will become way slower. I might use the regular GLOBAL to manage data internally for threadedtables…
It has been a while since I had to bring out the Multi Functions… Syncing within threads is a pain! I had no idea what a task something as simple as syncing data was going to be… I will probably add a SystemThreadedSyncer in the future because it will make life easier for you guys as well. SystemThreadedTables are still not going to work on love2d, but will work fine on lanes… I have a solution and it is being worked on… Fixed this :D. Depending on when I push the next update to this library the second half of this ramble won’t apply anymore
I have been using this (EventManager —> MultiManager —> now multi) for my own purposes and started making this when I first started learning lua. You can see how the code changed and evolved throughout the years. I tried to include all the versions that still existed on my HDD.
I added my old versions to this library… It started out as the EventManager and was kind of crappy, but it was the start of this library. It kept getting better and better until it became what it is today. There are some features that no longer exist in the latest version, but they were removed because they were useless… I added these files to GitHub so those interested can see into my mind in a sense and see how I developed the library before I used GitHub.
The first version of the EventManager was function based, not object based, and benched at about 2000 steps per second… Yeah, that was bad… I used loadstring and it was a mess… Look and see how it grew throughout the years; I think it may interest some of you guys!
diff --git a/README.md b/README.md
index dd2dfcf..7be5371 100644
--- a/README.md
+++ b/README.md
@@ -1,76 +1,45 @@
-# multi Version: 1.11.0 (Show me the love, love2d 11.1 support is here see changelog for details. Plus a new threaded object for testing!)
+# multi Version: 2.0.0 (Introducing Network Threads look at the changelog for what was added)
**NOTE: I have been studying a lot about threading for the past few months and have some awesome additions in store! They will take a while to come out though. The goal of the library is still to provide a simple and efficient way to multi task in lua**
-In Changes you'll find documentation for(In Order):
-- Sterilizing Objects
-- System Threaded Job Queues
-- New mainloop functions
-- System Threaded Tables
-- System Threaded Benchmark
-- System Threaded Queues
-- Threading related features
-- And backwards compat stuff
-
-My multitasking library for lua. It is a pure lua binding if you ingore the integrations and the love2d compat. If you find any bugs or have any issues please let me know :). **If you don't see a table of contents try using the ReadMe.html file. It is eaiser to navigate the readme**
+My multitasking library for lua. It is a pure lua binding if you ignore the integrations and the love2d compat. If you find any bugs or have any issues, please let me know :). **If you don't see a table of contents try using the ReadMe.html file. It is easier to navigate the readme**
[TOC]
INSTALLING
----------
-Note: The latest version of lualanes is required if you want to make use of system threads on lua 5.1+. I will update the dependencies for luarocks since this library should work fine on lua 5.1+
+Note: The latest version of Lua Lanes is required if you want to make use of system threads on lua 5.1+. I will update the dependencies for LuaRocks since this library should work fine on lua 5.1+
-To install copy the multi folder into your enviroment and you are good to go
-If you want to use the system threads then you'll need to install lanes!
+To install copy the multi folder into your environment and you are good to go
+If you want to use the system threads, then you'll need to install lanes!
**or** use luarocks
```
-luarocks install bin -- Inorder to use the new save state stuff
+luarocks install bin -- To use the new save state stuff
luarocks install multi
```
-Note: In the near future you may be able to run multitasking code on multiple machines, network paralisim. This however will have to wait until I hammer out some bugs within the core of system threading itself.
+Note: Soon you may be able to run multitasking code on multiple machines, network parallelism. This however will have to wait until I hammer out some bugs within the core of system threading itself.
See the rambling section to get an idea of how this will work.
Discord
-------
-For real-time assistance with my libraries! A place where you can ask questions and get help with any of my libraries. Also you can request features and stuff there as well.
+For real-time assistance with my libraries! A place where you can ask questions and get help with any of my libraries. Also, you can request features and stuff there as well.
https://discord.gg/U8UspuA
-**Upcoming Plans:** Adding network support for threading. Kinda like your own lua cloud. This will require the bin, net, and multi library. Once that happens I will include those libraries as a set. This also means that you can expect both a stand alone and joined versions of the libraries.
+**Upcoming Plans:** Adding network support for threading. Kind of like your own lua cloud. This will require the bin, net, and multi library. Once that happens I will include those libraries as a set. This also means that you can expect both standalone and joined versions of the libraries.
Planned features/TODO
---------------------
-- [x] ~~Add system threads for love2d that works like the lanesManager (loveManager, slight differences).~~
-- [x] ~~Improve performance of the library~~
-- [x] ~~Improve coroutine based threading scheduling~~
-- [ ] Improve love2d Idle thread cpu usage/Fix the performance when using system threads in love2d... Tricky Look at the rambling section for insight.
-- [x] ~~Add more control to coroutine based threading~~
-- [ ] Add more control to system based threading
- [ ] Make practical examples that show how you can solve real problems
-- [x] ~~Add more features to support module creators~~
-- [x] ~~Make a framework for eaiser thread task distributing~~
-- [x] ~~Fix Error handling on threaded multi objects~~ Non threaded multiobjs will crash your program if they error though! Use multi:newThread() of multi:newSystemThread() if your code can error! Unless you use multi:protect() this however lowers performance!
-- [x] ~~Add multi:OnError(function(obj,err))~~
-- [ ] sThread.wrap(obj) **May or may not be completed** Theory: Allows interaction in one thread to affect it in another. The addition to threaded tables may make this possible!
-- [ ] SystemThreaded Actors -- After some tests i figured out a way to make this work... It will work slightly different though. This is due to the actor needing to be splittable...
-- [ ] LoadBalancing for system threads (Once SystemThreaded Actors are done)
-- [x] ~~Add more integrations~~
-- [ ] Fix SystemThreadedTables
-- [ ] Finish the wiki stuff. (11% done)
-- [ ] Test for unknown bugs
+- [ ] Finish the wiki stuff. (11% done) -- It's been at 11% for so long. I really need to get on this!
+- [ ] Test for unknown bugs -- This is always going on
+- [x] ~~Network Parallelism~~
Known Bugs/Issues
-----------------
-~~In regards to integrations, thread cancellation works slightly different for love2d and lanes. Within love2d I was unable to (To lazy to...) not use the multi library within the thread. A fix for this is to call `multi:Stop()` when you are done with your threaded code! This may change however if I find a way to work around this. In love2d in order to mimic the GLOBAL table I needed the library to constantly sync tha data... You can use the sThread.waitFor(varname), or sThread.hold(func) methods to sync the globals, to get the value instead of using GLOBAL and this could work. If you want to go this route I suggest setting multi.isRunning=true to prevent the auto runner from doing its thing! This will make the multi manager no longer function, but thats the point :P~~ THREAD.kill() should do the trick from within the thread. A listener could be made to detect when thread kill has been requested and sent to the running thread.
-Another bug concerns the SystemThreadedJobQueue, Only 1 can be used for now. Going to change in a future update
-
-~~And systemThreadedTables only supports 1 table between the main and worker thread! They do not work when shared between 2 or more threads. If you need that much flexiblity ust the GLOBAL table that all threads have.~~ **FIXED**
-
-~~For module creators using this library. I suggest using SystemThreadedQueues for data transfer instead of SystemThreadedTables for rapid data transfer, If you plan on having Constants that will always be the same then a table is a good idea! They support up to **n** threads and can be messed with and abused as much as you want :D~~ FIXED Use what you want!
-
-~~Love2D SystemThreadedTAbles do not send love2d userdata, use queues instead for that!~~ **FIXED**
+A bug concerns the SystemThreadedJobQueue, only 1 can be used for now. Might change in a future update
Usage:
-----
@@ -84,12 +53,12 @@ alarm:OnRing(function(a)
end)
multi:mainloop() -- the main loop of the program, multi:umanager() exists as well to allow integration in other loops Ex: love2d love.update function. More on this binding in the wiki!
```
-The library is modular so you only need to require what you need to. Because of this, the global enviroment is altered
+The library is modular, so you only need to require what you need to. Because of this, the global environment is altered
There are many useful objects that you can use
Check out the wiki for detailed usage, but here are the objects:
- Process#
-- QueueQueuer#
+- Queue#
- Alarm
- Loop
- Event
@@ -107,15 +76,15 @@ Check out the wiki for detailed usage, but here are the objects:
- Job
- Function
- Watcher
-Note: *Both a process and queue act like the multi namespace, but allows for some cool things. Because they use the other objects an example on them will be done last*
+Note: *Both a process and queue act like the multi namespace but allow for some cool things. Because they use the other objects an example on them will be done last*
*Uses the built in coroutine features of lua, these have an interesting interaction with the other means of multi-tasking
Triggers are kind of useless after the creation of the Connection
Watchers have no real purpose either; I made them just because.
# Examples of each object being used
-We already showed alarms in action so lets move on to a Loop object
+We already showed alarms in action so let’s move on to a Loop object
-Throughout these examples I am going to do some strange things in order to show other features of the library!
+Throughout these examples I am going to do some strange things to show other features of the library!
LOOPS
-----
@@ -123,10 +92,10 @@ LOOPS
-- Loops: Have been moved to the core of the library require("multi") would work as well
require("multi") -- gets the entire library
count=0
-loop=multi:newLoop(function(self,dt) -- dt is delta time and self is a reference to itself
+loop=multi:newLoop(function(self,dt) -- dt is delta time and self is a reference to itself
count=count+1
if count > 10 then
- self:Break() -- All methods on the multi objects are upper camel case, where as methods on the multi or process/queuer namespace are lower camel case
+ self:Break() -- All methods on the multi objects are upper camel case, whereas methods on the multi or process/queuer namespace are lower camel case
-- self:Break() will stop the loop and trigger the OnBreak(func) method
-- Stopping is the act of Pausing and deactivating the object! All objects can have the multiobj:Break() command on it!
else
@@ -154,16 +123,16 @@ You broke me :(
With loops out of the way lets go down the line
-This library aims to be Async like. In reality everything is still on one thread *unless you are using the lanes integration module WIP* (More on that later)
+This library aims to be Async like. Everything is still on one thread *unless you are using the lanes integration module WIP* (A stable WIP, more on that later)
EVENTS
------
```lua
--- Events, these were the first objects introduced into the library. I seldomly use them in their pure form though, but later on you'll see their advance uses!
--- Events on there own don't really do much... We are going to need 2 objects at least to get something going
+-- Events, these were the first objects introduced into the library. I seldom use them in their pure form though, but later you'll see their advanced uses!
+-- Events on their own don't really do much... We are going to need 2 objects at least to get something going
require("multi") -- gets the entire library
count=0
--- lets use the loop again to add to count!
+-- let’s use the loop again to add to count!
loop=multi:newLoop(function(self,dt)
count=count+1
end)
@@ -181,7 +150,7 @@ STEPS
-----
```lua
require("multi")
--- Steps, are like for loops but non blocking... You can run a loop to infintity and everything will still run I will combine Steps with Ranges in this example.
+-- Steps are like for loops but non-blocking... You can run a loop to infinity and everything will still run. I will combine Steps with Ranges in this example.
step1=multi:newStep(1,10,1,0) -- Some explaining is due. Argument 1 is the Start # Argument 2 is the ResetAt # (inclusive) Argument 3 is the count # (in our case we are counting by +1, this can be -1 but you need to adjust your start and resetAt numbers)
-- The 4th Argument is for skipping. This is useful for timing and for basic priority management. A priority management system is included!
step2=multi:newStep(10,1,-1,1) -- a second step, notice the slight changes!
@@ -189,7 +158,7 @@ step1:OnStart(function(self)
print("Step Started!")
end)
step1:OnStep(function(self,pos)
- if pos<=10 then -- what what is this? the step only goes to 10!!!
+ if pos<=10 then -- The step only goes to 10
print("Stepping... "..pos)
else
print("How did I get here?")
@@ -197,27 +166,27 @@ step1:OnStep(function(self,pos)
end)
step1:OnEnd(function(self)
print("Done!")
- -- We finished here, but I feel like we could have reused this step in some way... Yeah I soule Reset() it, but what if i wanted to change it...
+ -- We finished here, but I feel like we could have reused this step in some way... I could use Reset(), but what if I wanted to change it...
if self.endAt==10 then -- let's only loop once
self:Update(1,11,1,0) -- oh now we can reach that else condition!
end
-- Note Update() will restart the step!
end)
--- step2 is bored lets give it some love :P
+-- step2 is bored let’s give it some love :P
step2.range=step2:newRange() -- Set up a range object to have a nested step in a sense! Each nest requires a new range
-- it is in your interest not to share ranges between objects! You can however do it if it suits your needs though
step2:OnStep(function(self,pos)
-- for 1=1,math.huge do
- -- print("Haha I am holding the code up because I can!!!")
+ -- print("I am holding the code up because I can!")
--end
- -- We dont want to hold things up, but we want to nest.
- -- Note a range is not nessary if the nested for loop has a small range, if however the range is rather large you may want to allow other objects to do some work
+ -- We don’t want to hold things up, but we want to nest.
+ -- Note a range is not necessary if the nested for loop has a small range; if, however, the range is rather large you may want to allow other objects to do some work
for i in self.range(1,100) do
- print(pos,i) -- Now our nested for loop is using a range object which allows for other objects to get some cpu time while this one is running
+ print(pos,i) -- Now our nested for loop is using a range object which allows for other objects to get some CPU time while this one is running
end
end)
--- TSteps are just like alarms and steps mixed together, the only difference in construction is the 4th Argument. On a TStep that argument controls time. The defualt is 1
+-- TSteps are just like alarms and steps mixed together, the only difference in construction is the 4th Argument. On a TStep that argument controls time. The default is 1
-- The Reset(n) works just like you would figure!
step3=multi:newTStep(1,10,.5,2) -- let's go from 1 to 10 counting by .5 every 2 seconds
step3:OnStep(function(self,pos)
@@ -227,7 +196,7 @@ multi:mainloop()
```
# Output
-Note: the output on this one is huge!!! So I had to ... some parts! You need to run this for your self to see what is going on!
+Note: the output on this one is huge!!! So, I had to ... some parts! You need to run this for yourself to see what is going on!
Step Started!
Stepping... 1
10 1
@@ -246,7 +215,7 @@ require("multi")
-- TLoops are loops that run ever n second. We will also look at condition objects as well
-- Here we are going to modify the old loop to be a little different
count=0
-loop=multi:newTLoop(function(self) -- We are only going to coult with this loop, but doing so using a condition!
+loop=multi:newTLoop(function(self) -- We are only going to count with this loop but doing so using a condition!
while self:condition(self.cond) do
count=count+1
end
@@ -254,7 +223,7 @@ loop=multi:newTLoop(function(self) -- We are only going to coult with this loop,
self:Destroy() -- Let's destroy this object, casting it to the dark abyss MUHAHAHA!!!
-- the reference to this object will be a phantom object that does nothing!
end,1) -- Notice the ',1' after the function! This is where you put your time value!
-loop.cond=multi:newCondition(function() return count<=100 end) -- conditions need a bit of work before i am happy with them
+loop.cond=multi:newCondition(function() return count<=100 end) -- conditions need a bit of work before I am happy with them
multi:mainloop()
```
# Output
@@ -266,22 +235,22 @@ These are my favorite objects and you'll see why. They are very useful objects f
```lua
require("multi")
--- Lets create the events
+-- Let’s create the events
yawn={} -- I'll just leave that there
-OnCustomSafeEvent=multi:newConnection(true) -- lets pcall the calls incase something goes wrong defualt
-OnCustomEvent=multi:newConnection(false) -- lets not pcall the calls and let errors happen... We are good at coding though so lets get a speed advantage by not pcalling. Pcalling is useful for plugins and stuff that may have been coded badly and you can ingore those connections if need be.
+OnCustomSafeEvent=multi:newConnection(true) -- let's pcall the calls in case something goes wrong (the default)
+OnCustomEvent=multi:newConnection(false) -- let’s not pcall the calls and let errors happen... We are good at coding though so let’s get a speed advantage by not pcalling. Pcalling is useful for plugins and stuff that may have been coded badly and you can ignore those connections if need be.
OnCustomEvent:Bind(yawn) -- create the connection lookup data in yawn
--- Lets connect to them, a recent update adds a nice syntax to connect to these
+-- Let’s connect to them, a recent update adds a nice syntax to connect to these
cd1=OnCustomSafeEvent:Connect(function(arg1,arg2,...)
print("CSE1",arg1,arg2,...)
-end,"bob") -- lets give this connection a name
+end,"bob") -- let’s give this connection a name
cd2=OnCustomSafeEvent:Connect(function(arg1,arg2,...)
print("CSE2",arg1,arg2,...)
-end,"joe") -- lets give this connection a name
+end,"joe") -- let’s give this connection a name
cd3=OnCustomSafeEvent:Connect(function(arg1,arg2,...)
print("CSE3",arg1,arg2,...)
-end) -- lets not give this connection a name
+end) -- let’s not give this connection a name
-- no need for connect, but I kept that function because of backwards compatibility.
OnCustomEvent(function(arg1,arg2,...)
@@ -289,7 +258,7 @@ OnCustomEvent(function(arg1,arg2,...)
end)
-- Now within some loop/other object you trigger the connection like
-OnCustomEvent:Fire(1,2,"Hello!!!") -- fire all conections
+OnCustomEvent:Fire(1,2,"Hello!!!") -- fire all connections
-- You may have noticed that some events have names! See the following example!
OnCustomSafeEvent:getConnection("bob"):Fire(1,100,"Bye!") -- fire only bob!
@@ -322,7 +291,7 @@ You may think timers should be bundled with alarms, but they are a bit different
TIMERS
------
```lua
--- You see the thing is that all time based objects use timers eg. Alarms, TSteps, and Loops. Timers are more low level!
+-- You see the thing is that all time-based objects use timers e.g. Alarms, TSteps, and Loops. Timers are more low level!
require("multi")
local clock = os.clock
function sleep(n) -- seconds
@@ -332,7 +301,7 @@ end -- we will use this later!
timer=multi:newTimer()
timer:Start()
--- lets do a mock alarm
+-- let’s do a mock alarm
set=3 -- 3 seconds
a=0
while timer:Get()<=set do
@@ -357,7 +326,7 @@ sleep(1)
print(timer:Get()) -- should be really close to the value of set + 2
```
# Output
-Note: This will make more sense when you run it for your self
+Note: This will make more sense when you run it for yourself
3 second(s) have passed!
3.001
3.001
@@ -371,17 +340,17 @@ UPDATER
```lua
-- Updaters: Have been moved to the core of the library require("multi") would work as well
require("multi")
-updater=multi:newUpdater(5) -- really simple, think of a look with the skip feature of a step
+updater=multi:newUpdater(5) -- simple, think of a loop with the skip feature of a step
updater:OnUpdate(function(self)
--print("updating...")
end)
-- Here every 5 steps the updater will do stuff!
--- But I feel it is now time to touch into priority management, so lets get into basic priority stuff and get into a more advance version of it
+-- But I feel it is now time to touch on priority management, so let's get into basic priority stuff and then a more advanced version of it
--[[
multi.Priority_Core -- Highest form of priority
multi.Priority_High
multi.Priority_Above_Normal
-multi.Priority_Normal -- The defualt form of priority
+multi.Priority_Normal -- The default form of priority
multi.Priority_Below_Normal
multi.Priority_Low
multi.Priority_Idle -- Lowest form of priority
@@ -391,7 +360,7 @@ We aren't going to use regular objects to test priority, but rather benchmarks!
to set priority on an object though you would do
multiobj:setPriority(one of the above)
]]
--- lets bench for 3 seconds using the 3 forms of priority! First no Priority
+-- let’s bench for 3 seconds using the 3 forms of priority! First no Priority
multi:benchMark(3,nil,"Regular Bench: "):OnBench(function() -- OnBench() lets us run each bench one after another!
print("P1\n---------------")
multi:enablePriority()
@@ -403,7 +372,7 @@ multi:benchMark(3,nil,"Regular Bench: "):OnBench(function() -- the onbench() all
multi:benchMark(3,multi.Priority_Low,"Low:")
multi:benchMark(3,multi.Priority_Idle,"Idle:"):OnBench(function()
print("P2\n---------------")
- -- Finally the 3rd form
+ -- Finally, the 3rd form
multi:enablePriority2()
multi:benchMark(3,multi.Priority_Core,"Core:")
multi:benchMark(3,multi.Priority_High,"High:")
@@ -417,7 +386,7 @@ end)
multi:mainloop() -- Notice how the past few examples did not need this, well only actors need to be in a loop! More on this in the wiki.
```
# Output
-Note: These numbers will vary drastically depending on your compiler and cpu power
+Note: These numbers will vary drastically depending on your compiler and CPU power
Regular Bench: 2094137 Steps in 3 second(s)!
P1
Below_Normal: 236022 Steps in 3 second(s)!
@@ -440,7 +409,7 @@ Notice: Even though I started each bench at the same time the order that they fi
Processes
---------
-A process allows you to group the Actor objects within a controlable interface
+A process allows you to group the Actor objects within a controllable interface
```lua
require("multi")
proc=multi:newProcess() -- takes an optional file as an argument, but for this example we aren't going to use that
@@ -448,7 +417,7 @@ proc=multi:newProcess() -- takes an optional file as an argument, but for this e
b=0
loop=proc:newTLoop(function(self)
a=a+1
- proc:Pause() -- pauses the cpu cycler for this processor! Individual objects are not paused, however because they aren't getting cpu time they act as if they were paused
+ proc:Pause() -- pauses the CPU cycler for this processor! Individual objects are not paused, however because they aren't getting CPU time they act as if they were paused
end,.1)
updater=proc:newUpdater(multi.Priority_Idle) -- priority can be used in skip arguments as well to manage priority without enabling it!
updater:OnUpdate(function(self)
@@ -456,14 +425,14 @@ updater:OnUpdate(function(self)
end)
a=0 -- a counter
loop2=proc:newLoop(function(self,dt)
- print("Lets Go!")
+ print("Let’s Go!")
	self:hold(3) -- this will keep this object from doing anything! Note: You can only have one hold active at a time! Multiple are possible, but results may not be as they seem; see * for how hold works
- -- Within a process using hold will keep it alive until the hold is satisified!
+ -- Within a process using hold will keep it alive until the hold is satisfied!
print("Done being held for 1 second")
self:hold(function() return a>10 end)
print("A is now: "..a.." b is also: "..b)
self:Destroy()
- self.Parent:Pause() -- lets say you don't have the reference to the process!
+ self.Parent:Pause() -- let’s say you don't have the reference to the process!
os.exit()
end)
-- Notice this is now being created on the multi namespace
@@ -476,7 +445,7 @@ proc:Start()
multi:mainloop()
```
# Output
-Lets Go!
+Let’s Go!
Done being held for 1 second
A is now: 29 b is also: 479
@@ -488,7 +457,7 @@ function multi:hold(task)
if type(task)=='number' then -- a sleep cmd
local timer=multi:newTimer()
timer:Start()
		while timer:Get()<task do end -- busy-wait until 'task' seconds have passed
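The hold logic sketched above can be written as a self-contained busy-wait in plain Lua (an illustration of the idea, not the library's exact code; the predicate branch is an assumption based on the `hold(function() ... end)` usage shown earlier):

```lua
-- Simplified sketch of hold's two branches: a number spins until that much
-- CPU time passes; a function spins until the condition returns true.
local clock = os.clock

local function hold(task)
	if type(task) == "number" then -- a sleep command
		local start = clock()
		while clock() - start < task do end -- spin until 'task' seconds pass
	elseif type(task) == "function" then -- a condition to wait on
		while not task() do end -- spin until the condition returns true
	end
end

local done, n = false, 0
hold(function() n = n + 1; done = n > 3; return done end)
-- after the call, n is 4 and done is true
```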
Threads
-------
-These fix the hold problem that you get with regular objects, and they work exactly the same! They even have some extra features that make them really useful.
+These fix the hold problem that you get with regular objects, and they work the same! They even have some extra features that make them really useful.
```lua
require("multi")
test=multi:newThreadedProcess("main") -- you can thread processes and all Actors; see note for a list of actors you can thread!
@@ -633,7 +602,7 @@ Threadable Actors
Functions
---------
If you ever wanted to pause a function, then great: now you can
-The uses of the Function object allows one to have a method that can run free in a sense
+The use of the Function object allows one to have a method that can run free in a sense
```lua
require("multi")
func=multi:newFunction(function(self,arg1,arg2,...)
@@ -687,7 +656,7 @@ trig:Fire(1,2,3,"Hello",true)
Tasks
-----
-Tasks allow you to run a block of code before the multi mainloops does it thing. Tasks still have a use, but depending on how you program they aren't needed.
+Tasks allow you to run a block of code before the multi mainloop does its thing. Tasks still have a use, but depending on how you program, they aren't needed.
```lua
require("multi")
multi:newTask(function()
@@ -711,7 +680,7 @@ As seen in the example above the tasks were done before anything else in the mai
Jobs
----
-Jobs were a strange feature that was created for throttling connections! When I was building a irc bot around this library I couldn't have messages posting too fast due to restrictions. Jobs allowed functions to be added to a queue that were executed after a certain amount of time has passed
+Jobs were a strange feature that was created for throttling connections! When I was building an IRC bot around this library I couldn't have messages posting too fast due to restrictions. Jobs allowed functions to be added to a queue that were executed after a certain amount of time has passed
```lua
require("multi") -- jobs use alarms; I am pondering whether alarms should be added to the core or if jobs should use timers instead...
-- jobs are built into the core of the library so no need to require them
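-- The throttling idea can be sketched in plain Lua (an assumption about the
-- mechanism, not the library's actual job API): queued functions run no
-- faster than one per interval.
local clock = os.clock
local function newThrottledQueue(interval)
	local q = { items = {}, last = -math.huge }
	function q:push(fn) self.items[#self.items + 1] = fn end
	function q:update() -- call from your main loop; runs at most one job per interval
		if #self.items > 0 and clock() - self.last >= interval then
			self.last = clock()
			table.remove(self.items, 1)()
		end
	end
	return q
end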
@@ -750,7 +719,7 @@ Watchers allow you to monitor a variable and trigger an event when the variable
```lua
require("multi")
a=0
-watcher=multi:newWatcher(_G,"a") -- watch a in the global enviroment
+watcher=multi:newWatcher(_G,"a") -- watch a in the global environment
watcher:OnValueChanged(function(self,old,new)
print(old,new)
end)
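-- Under the hood, a watcher can be built from a proxy table whose __newindex
-- intercepts writes. This is a plain-Lua sketch of the idea (an assumption,
-- not the library's code): reads pass through, writes fire a change handler.
local store = { a = 0 }
local fired = {}
local watched = setmetatable({}, {
	__index = store, -- reads fall through to the real values
	__newindex = function(_, k, v)
		local old = store[k]
		store[k] = v
		if old ~= v then fired[#fired + 1] = { k, old, v } end -- the "value changed" event
	end,
})
watched.a = 5 -- triggers the handler with old=0, new=5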
@@ -769,7 +738,7 @@ multi:mainloop()
Timeout management
------------------
```lua
--- Note: I used a tloop so I could control the output of the program a bit.
+-- Note: I used a tloop, so I could control the output of the program a bit.
require("multi")
a=0
inc=1 -- change to 0 to see it not met at all; 1 to see the first condition not met but the second met; and 2 to see it meet the condition on the first go.
@@ -818,7 +787,7 @@ We did it! 1 2 3
Rambling
--------
5/23/18:
-When it comes to running code across different systems we run into a problem. It takes time to send objects from one maching to another. In the beginning only local networks will be supported. I may add support to send commands to another network to do computing. Like having your own lus cloud. userdata will never be allowed to run on other machines. It is not possible unless the library you are using allows userdata to be turned into a string and back into an object. With this feature you want to send a command that will take time or needs tons of them done millions+, reason being networks are not that "fast" and only simple objects can be sent. If you mirror your enviroment then you can do some cool things.
+When it comes to running code across different systems we run into a problem. It takes time to send objects from one machine to another. In the beginning, only local networks will be supported. I may add support to send commands to another network to do computing. Like having your own Lua cloud. Userdata will never be allowed to run on other machines. It is not possible unless the library you are using allows userdata to be turned into a string and back into an object. With this feature you want to send commands that either take a long time or come in huge batches (millions+), the reason being that networks are not that "fast" and only simple objects can be sent. If you mirror your environment then you can do some cool things.
The planned structure will be something like this:
multi-Single Threaded Multitasking
@@ -845,7 +814,7 @@ multi:mainloop()
```lua
GLOBAL,sThread=require("multi.integration.networkManager").init() -- This will determine if one is using lanes,love2d, or luvit
node = multi:newNode("NodeName","MainSystem") -- Search the network for the host, connect to it and be ready for requests!
--- On the main thread, a simple multi:newNetworkThread thread and also non system threads, you can access global data without an issue. When dealing with system threads is when you have a problem.
+-- On the main thread, a simple multi:newNetworkThread thread and non-system threads, you can access global data without an issue. When dealing with system threads is when you have a problem.
node:setLog{
maxLines = 10000,
cleanOnInterval = true,
@@ -853,30 +822,28 @@ node:setLog{
noLog = false -- default is false, make true if you do not need a log
}
node:settings{
- maxJobs = 100, -- Job queues will respect this as well as the host when it is figuting out which node is under the least load. Default: 0 or infinite
+ maxJobs = 100, -- Job queues will respect this as well as the host when it is figuring out which node is under the least load. Default: 0 or infinite
	sendLoadInterval = 60, -- every 60 seconds, update the host with this node's load
sendLoad = true -- default is true, tells the server how stressed the system is
}
multi:mainloop()
--- Note: the node will contain a log of all the commands that it gets. A file called "NodeName.log" will contain the info. You can set the limit by lines or file size. Also you can set it to clear the log every interval of time if an error does not exist. All errors are both logged and sent to the host as well. You can have more than one host and more than one node(duh :P).
+-- Note: the node will contain a log of all the commands that it gets. A file called "NodeName.log" will contain the info. You can set the limit by lines or file size. Also, you can set it to clear the log every interval of time if an error does not exist. All errors are both logged and sent to the host as well. You can have more than one host and more than one node (duh :P).
```
The goal of the node is to set up a simple and easy way to run commands on a remote machine.
-There are 2 main ways you can use this feature. 1. One node per machine with system threads being able to use the full processing power of the machine. 2. Multiple nodes on one machine where each node is acting like its own thread. And of course a mix of the two is indeed possible.
+There are 2 main ways you can use this feature. 1. One node per machine with system threads being able to use the full processing power of the machine. 2. Multiple nodes on one machine where each node is acting like its own thread. And of course, a mix of the two is indeed possible.
-Love2d Sleeping reduces the cpu time making my load detection think the system is under more load, thus preventing it from sleeping... I will look into other means. As of right now it will not eat all of your cpu if threads are active. For now I suggest killing threads that aren't needed anymore. On lanes threads at idle use 0% cpu and it is amazing. A state machine may solve what I need though. One state being idle state that sleeps and only goes into the active state if a job request or data is sent to it... after some time of not being under load it wil switch back into the idle state... We'll see what happens.
+Love2d Sleeping reduces the CPU time making my load detection think the system is under more load, thus preventing it from sleeping... I will investigate other means. As of right now it will not eat all your CPU if threads are active. For now, I suggest killing threads that aren't needed anymore. On lanes threads at idle use 0% CPU and it is amazing. A state machine may solve what I need though. One state being idle state that sleeps and only goes into the active state if a job request or data is sent to it... after some time of not being under load it will switch back into the idle state... We'll see what happens.
-Love2d doesn't like to send functions through channels. By defualt it does not support this. I achieve this by dumping the function and loadstring it on the thread. This however is slow. For the System Threaded Job Queue I had to change my original idea of sending functions as jobs. The current way you do it now is register a job functions once and then call that job across the thread through a queue. Each worker thread pops from the queue and returns the job. The Job ID is automatically updated and allows you to keep track of the order that the data comes in. A table with # indexes can be used to originze the data...
+Love2d doesn't like to send functions through channels. By default, it does not support this. I achieve this by dumping the function and loadstring-ing it on the thread. This, however, is slow. For the System Threaded Job Queue, I had to change my original idea of sending functions as jobs. The way you do it now is to register a job function once and then call that job across the thread through a queue. Each worker thread pops from the queue and returns the job. The Job ID is automatically updated and allows you to keep track of the order that the data comes in. A table with # indexes can be used to organize the data...
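The dump-and-load trick described above can be demonstrated in plain Lua, no love2d required (this is a sketch of the general technique, not the library's internals):

```lua
-- Sending a function as a string: serialize with string.dump, revive with
-- loadstring (Lua 5.1) or load (Lua 5.2+). Works only for Lua functions
-- without upvalues, which matches the restriction mentioned elsewhere.
local function job(a, b) return a + b end

local dumped = string.dump(job)      -- the function body as a binary string
local revive = loadstring or load    -- pick the right loader for the Lua version
local restored = revive(dumped)      -- rebuild a callable function from the string

assert(restored(2, 3) == 5)          -- the revived copy behaves like the original
```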
-In regards to benchmarking. If you see my bench marks and are wondering they are 10x better its because I am using luajit for my tests. I highly recommend using luajit for my library, but lua 5.1 will work just as well, but not as fast.
+Regarding benchmarking: if you see my benchmarks and are wondering why they are 10x better, it’s because I am using LuaJIT for my tests. I highly recommend using LuaJIT with my library, but Lua 5.1 will work just as well, only not as fast.
-So while working on the jobQueue:doToAll() method I figured out why love2d's threaded tables were acting up when more than 1 thread was sharing the table. It turns out 1 thread was eating all of the pops from the queue and starved all of the other queues... Ill need to use the same trick I did with GLOBAL to fix the problem... However at the rate I am going threading in love will become way slower. I might use the regualr GLOBAL to manage data internally for threadedtables...
+So, while working on the jobQueue:doToAll() method I figured out why love2d's threaded tables were acting up when more than 1 thread was sharing the table. It turns out 1 thread was eating all the pops from the queue and starved all the other queues... I’ll need to use the same trick I did with GLOBAL to fix the problem... However, at the rate I am going threading in love will become way slower. I might use the regular GLOBAL to manage data internally for threadedtables...
-It has been awhile since I had to bring out the Multi Functions... Syncing within threads are a pain! I had no idea what a task it would be to get something as simple as syncing data was going to be... I will probably add a SystemThreadedSyncer in the future because it will make life eaiser for you guys as well. SystemThreadedTables are still not going to work on love2d, but will work fine on lanes... I have a solution and it is being worked on... Depending on when I pust the next update to this library the second half of this ramble won't apply anymore
+I have been using this (EventManager --> MultiManager --> now multi) for my own purposes and started making this when I first started learning lua. You can see how the code changed and evolved throughout the years. I tried to include all the versions that still existed on my HDD.
-I have been using this (EventManager --> MultiManager --> now multi) for my own purposes and started making this when I first started learning lua. You are able to see how the code changed and evolved throughout the years. I tried to include all the versions that still existed on my HDD.
+I added my old versions to this library... It started out as the EventManager and was kind of crappy, but it was the start of this library. It kept getting better and better until it became what it is today. There are some features that no longer exist in the latest version, but they were removed because they were useless... I added these files to GitHub so those interested can see into my mind in a sense and see how I developed the library before I used GitHub.
-I added my old versions to this library... It started out as the EventManager and was kinda crappy but it was the start to this library. It kept getting better and better until it became what it is today. There are some features that nolonger exist in the latest version, but they were remove because they were useless... I added these files to the github so for those interested can see into my mind in a sense and see how I developed the library before I used github.
-
-The first version of the EventManager was function based not object based and benched at about 2000 steps per second... Yeah that was bad... I used loadstring and it was a mess... Take a look and see how it grew throughout the years I think it may intrest some of you guys!
\ No newline at end of file
+The first version of the EventManager was function based, not object based, and benched at about 2000 steps per second... Yeah, that was bad... I used loadstring and it was a mess... Take a look and see how it grew throughout the years; I think it may interest some of you!
diff --git a/changes.html b/changes.html
index 4aa4fb7..55c2752 100644
--- a/changes.html
+++ b/changes.html
@@ -9,15 +9,31 @@
-Changes
Update: 1.11.0
Added:
-- SystemThreadedConsole(name) — Allsow each thread to print without the sync issues that make prints merge and hard to read.
Changes
Update: 1.11.1
Love2d change:
I didn’t make a mistake, but I didn’t fully understand how the new love.run function worked.
It works by returning a function that runs the mainloop. This means we can do something like this:
multi:newLoop(love.run()) -- Run the mainloop here, cannot use thread.* when using this object
+
+-- or
+
+multi:newThread("MainLoop",love.run()) -- allows you to use the thread.*
+
+--And you'll need to throw this in at the end
+multi:mainloop()
+
For the long-time users of this library you know of the amazing multitasking features that the library has. Used correctly you can have insane power. The priority management system should be quite useful with this change.
NOTE: multiobj:hold() will be removed in the next version! This is something I feel should be changed, since threads (coroutines) do the job great, and way better than my holding method that I threw together 5 years ago. I doubt this is being used by many anyway. Version 1.11.2 or version 2.0.0 will have this change. The next update will be either bug fixes, if any, or network parallelism.
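The reason coroutines handle waiting better than hold() can be sketched in plain Lua: the waiting code yields instead of blocking, so a scheduler can run other work in the meantime (illustrative only; this is not the library's thread implementation):

```lua
-- A coroutine "thread" that sleeps cooperatively: it yields a wake-up time
-- and the scheduler resumes it later, instead of holding up the whole loop.
local clock = os.clock
local function sleep(n) coroutine.yield(clock() + n) end

local co = coroutine.create(function()
	sleep(0.01) -- yields; other coroutines could run during this time
	return "done"
end)

local ok, wake = coroutine.resume(co)  -- runs until the first sleep()
while clock() < wake do end            -- a real scheduler would run other tasks here
local ok2, result = coroutine.resume(co)
```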
TODO: Add auto priority adjustments when working with priority and stuff… If the system is under heavy load it will dial some things deemed as less important down and raise the core processes.
Update: 1.11.0
Added:
+- SystemThreadedConsole(name) — Allow each thread to print without the sync issues that make prints merge and hard to read.
-- MainThread:
console = multi:newSystemThreadedConsole("console"):init()
-- Thread:
@@ -25,7 +41,7 @@ console = THREAD.waitFor("console"):init()
-- using the console
console:print(...)
-console:write(...) -- kinda useless for formatting code though. other threads can eaisly mess this up.
+console:write(...) -- kind of useless for formatting code though. other threads can easily mess this up.
Fixed/Updated:
- Love2d 11.1 support is now here! Will now require these lines in your main.lua file
function
end,1)
multi:mainloop()
Update: 1.9.2
Added:
-- (THREAD).kill() kills a thread. Note: THREAD is based on what you name it
- newTimeStamper() Part of the persistent systems… Useful for when you are running this library for a massive amount of time… like years straight!
Allows one to hook to timed events such as whenever the clock strikes midnight or when the day turns to Monday. The event is only done once though, so as soon as Monday is set it would trigger, then not trigger again until next Monday
works for seconds, minutes, days, months, and years. stamper:OnDay(int day,func)
stamper:OnMonth(int month,func)
stamper:OnYear(int year,func)
-Updated: - LoadBalancing, well better load balancing than existed before. This one allowed for multiple processes to have their own load reading. Calling this on the multi object will return the total load for the entire multi environment… loads of other processes are indeed affected by what other processes are doing. However, if you combine priority into the mix then you will get differing results… these results will most likely be higher than normal… different priorities will have different default thresholds of performance.
Fixed:
-- Thread.getName() should now work on lanes and love2d, haven’t tested it much with the luvit side of things…
- A bug with the loveManager: the table.remove arguments were backwards haha
- The queue object in the love2d threading has been fixed! It now supports sending all objects (even functions as long as no upvalues are present!)
Changed:
-- SystemThreadedJobQueues now have built in load management so they are not constantly at 100% cpu usage.
- SystemThreadedJobQueues pushJob now returns an id of that job which will match the same one that OnJobCompleted returns
Update: 1.9.1
Added:
-- Integration “multi.integration.luvitManager”
- Limited… Only the basic multi:newSystemThread(…) will work
- Not even data passing will work other than arguments… If using the bin library you can pass tables and function… Even full objects as long as inner recursion is not preasent.
Updated:
-- multi:newSystemThread(name,func,…)
- It will not pass the … to the func(). Do not know why this wasn’t done in the first place :P
- Also multi:getPlatform(will now return “luvit” if using luvit… Though Idk if module creators would use the multi library when inside the luvit enviroment
Update: 1.9.0
Added:
-- multiobj:ToString() — returns a string repersenting the object
- multi:newFromString(str) — creates an object from a string
Works on threads and regular objects. Requires the latest bin library to work!
Update: 1.9.1
Added:
+- Integration “multi.integration.luvitManager”
- Limited… Only the basic multi:newSystemThread(…) will work
- Not even data passing will work other than arguments… If using the bin library, you can pass tables and functions… Even full objects if inner recursion is not present.
Updated:
+- multi:newSystemThread(name,func,…)
- It will not pass the … to the func(). Do not know why this wasn’t done in the first place
- Also multi:getPlatform() will now return “luvit” if using luvit… Though Idk if module creators would use the multi library when inside the luvit environment
Update: 1.9.0
Added:
+- multiobj:ToString() — returns a string representing the object
- multi:newFromString(str) — creates an object from a string
Works on threads and regular objects. Requires the latest bin library to work!
function
end,1)
multi:mainloop()
Update: 1.8.4
Added:
-- multi:newSystemThreadedJobQueue()
- Improved stability of the library
- Fixed a bug that made the benchmark and getload commands non-thread(coroutine) safe
- Tweaked the loveManager to help improve idle CPU usage
- Minor tweaks to the coroutine scheduling
Using multi:newSystemThreadedJobQueue()
First you need to create the object
This works the same way as love2d as it does with lanes… It is getting increasing harder to make both work the same way with speed in mind… Anyway…
Using multi:newSystemThreadedJobQueue()
First you need to create the object
This works the same way as love2d as it does with lanes… It is getting harder to make both work the same way with speed in mind… Anyway…
-- Creating the object using lanes manager to show case this. Examples has the file for love2d
local GLOBAL,sThread=require("multi.integration.lanesManager").init()
-jQueue=multi:newSystemThreadedJobQueue(n) -- this internally creates System threads. By defualt it will use the # of processors on your system You can set this number though.
--- Only create 1 jobqueue! For now making more than 1 is buggy. You only really need one though. Just register new functions if you want 1 queue to do more. The one reason though is keeping track of jobIDs. I have an idea that I will roll out in the next update.
+jQueue=multi:newSystemThreadedJobQueue(n) -- this internally creates system threads. By default it will use the # of processors on your system. You can set this number though.
+-- Only create 1 jobqueue! For now, making more than 1 is not supported. You only really need one though. Just register new functions if you want 1 queue to do more. The one reason though is keeping track of jobIDs. I have an idea that I will roll out in the ~~next update~~ eventually.
jQueue:registerJob("TEST_JOB",function(a,s)
math.randomseed(s)
-- We will push a random #
@@ -340,9 +356,9 @@ jQueue:registerJob("TEST_JOB2",print("Test Works!") -- this is called from the job since it is registered on the same queue
end)
tableOfOrder={} -- This is how we will keep order of our completed jobs. There is no guarantee that the order will be correct
-jQueue.OnJobCompleted(function(JOBID,n) -- whenever a job is completed you hook to the event that is called. This passes the JOBID folled by the returns of the job
+jQueue.OnJobCompleted(function(JOBID,n) -- whenever a job is completed you hook to the event that is called. This passes the JOBID followed by the returns of the job
-- JOBID is the completed job, starts at 1 and counts up by 1.
- -- Threads finish at different times so jobids may be passed out of order! Be sure to have a way to order them
+ -- Threads finish at different times so jobIDs may be passed out of order! Be sure to have a way to order them
tableOfOrder[JOBID]=n -- we order ours by putting them into a table
if #tableOfOrder==10 then
print("We got all of the pieces!")
@@ -354,10 +370,10 @@ jQueue.OnJobCompleted(fun
end
print("I pushed all of the jobs :)")
multi:mainloop() -- Start the main loop :D
-
That's it for this version!
Update: 1.8.3
Added:
New Mainloop functions Below you can see the slight differences… Function overhead is not too bad in lua, but has a real difference. multi:mainloop() and multi:unprotectedMainloop() use the same algorithm yet the dedicated unprotected one is slightly faster due to having less function overhead.
-- multi:mainloop()* — Bench: 16830003 Steps in 3 second(s)!
- multi:protectedMainloop() — Bench: 16699308 Steps in 3 second(s)!
- multi:unprotectedMainloop() — Bench: 16976627 Steps in 3 second(s)!
- multi:prioritizedMainloop1() — Bench: 15007133 Steps in 3 second(s)!
- multi:prioritizedMainloop2() — Bench: 15526248 Steps in 3 second(s)!
* The OG mainloop function remains the same and old methods to achieve what we have with the new ones still exist
These new methods help by removing function overhead that is caused through the original mainloop function. The one downside is that you no longer have the flexiblity to change the processing during runtime.
However there is a work around! You can use processes to run multiobjs as well and use the other methods on them.
I may make a full comparison between each method and which is faster, but for now trust that the dedicated ones with less function overhead are infact faster. Not by much but still faster. :D
Update: 1.8.2
Added:
-- multi:newsystemThreadedTable(name) NOTE: Metatables are not supported in transfers. However there is a work around obj:init() that you see does this. Take a look in the multi/integration/shared/shared.lua files to see how I did it!
- Modified the GLOBAL metatable to sync before doing its tests
- multi._VERSION was multi.Version, felt it would be more consistant this way… I left the old way of getting the version just incase someone has used that way. It will eventually be gone. Also multi:getVersion() will do the job just as well and keep your code nice and update related bug free!
- Also everything that is included in the: multi/integration/shared/shared.lua (Which is loaded automatically) works in both lanes and love2d enviroments!
The threaded table is setup just like the threaded queue.
It provids GLOBAL like features without having to write to GLOBAL!
This is useful for module creators who want to keep their data private, but also use GLOBAL like coding.
It has a few features that makes it a bit better than plain ol GLOBAL (For now…)
(ThreadedTable - TT for short)
-- TT:waitFor(name)
- TT:sync()
- TT[“var”]=value
- print(TT[“var”])
we also have the “sync” method, this one was made for love2d because we do a syncing trick to get data in a table format. The lanes side has a sync method as well so no worries. Using indexing calls sync once and may grab your variable. This allows you to have the lanes indexing ‘like’ syntax when doing regular indexing in love2d side of the module. As of right now both sides work flawlessly! And this effect is now the GLOBAL as well
On GLOBALS sync is a internal method for keeping the GLOBAL table in order. You can still use sThread.waitFor(name) to wait for variables that may of may not yet exist!
Time for some examples:
Using multi:newSystemThreadedTable(name)
Update: 1.8.3
Added:
New Mainloop functions Below you can see the slight differences… Function overhead is not too bad in Lua, but it makes a real difference. multi:mainloop() and multi:unprotectedMainloop() use the same algorithm, yet the dedicated unprotected one is slightly faster due to having less function overhead.
+- multi:mainloop()* — Bench: 16830003 Steps in 3 second(s)!
- multi:protectedMainloop() — Bench: 16699308 Steps in 3 second(s)!
- multi:unprotectedMainloop() — Bench: 16976627 Steps in 3 second(s)!
- multi:prioritizedMainloop1() — Bench: 15007133 Steps in 3 second(s)!
- multi:prioritizedMainloop2() — Bench: 15526248 Steps in 3 second(s)!
* The OG mainloop function remains the same and old methods to achieve what we have with the new ones still exist
These new methods help by removing function overhead that is caused through the original mainloop function. The one downside is that you no longer have the flexibility to change the processing during runtime.
However there is a work around! You can use processes to run multiobjs as well and use the other methods on them.
I may make a full comparison between each method and which is faster, but for now trust that the dedicated ones with less function overhead are in fact faster. Not by much but still faster.
Update: 1.8.2
Added:
+- multi:newsystemThreadedTable(name) NOTE: Metatables are not supported in transfers. However, there is a workaround: obj:init() does this. Look in the multi/integration/shared/shared.lua files to see how I did it!
- Modified the GLOBAL metatable to sync before doing its tests
- multi._VERSION was multi.Version, felt it would be more consistent this way… I left the old way of getting the version just in case someone has used that way. It will eventually be gone. Also multi:getVersion() will do the job just as well and keep your code nice and update related bug free!
- Also everything that is included in the: multi/integration/shared/shared.lua (Which is loaded automatically) works in both lanes and love2d environments!
The threaded table is set up just like the threaded queue.
It provides GLOBAL-like features without having to write to GLOBAL!
This is useful for module creators who want to keep their data private, but also use GLOBAL like coding.
It has a few features that make it a bit better than plain ol' GLOBAL (For now…)
(ThreadedTable - TT for short) This was modified by a recent version that removed the need for a sync command
+- TT:waitFor(name)
- TT:sync()
- TT[“var”]=value
- print(TT[“var”])
We also have the “sync” method; this one was made for love2d because we do a syncing trick to get data in a table format. The lanes side has a sync method as well, so no worries. Using indexing calls sync once and may grab your variable. This allows you to have the lanes-style indexing syntax when doing regular indexing on the love2d side of the module. As of right now both sides work flawlessly! And this effect now applies to GLOBAL as well
On GLOBALS sync is a internal method for keeping the GLOBAL table in order. You can still use sThread.waitFor(name) to wait for variables that may or may not yet exist!
Time for some examples:
Using multi:newSystemThreadedTable(name)
"test2",print(test:waitFor("test2"))
end)
multi:mainloop()
-
-- love2d gaming lua! NOTE: this is in main4.lua in the love2d examples
+
">-- love2d lua! NOTE: this is in main4.lua in the love2d examples
require("core.Library")
GLOBAL,sThread=require("multi.integration.loveManager").init() -- load the love2d version of the lanesManager and requires the entire multi library
require("core.GuiManager")
multi:newThread("test2",function()
	print(test:waitFor("test2"))
	t.text="DONE!"
end)
t=gui:newTextLabel("not done yet!",0,0,300,100)
t:centerX()
t:centerY()
Update: 1.8.1
No real change!
Changed the structure of the library. Combined the coroutine based threads into the core!
Only compat and integrations are not part of the core and never will be by nature.
This should make the library more convenient to use.
I left multi/all.lua file so if anyone had libraries/projects that used that it will still work!
Updated from 1.7.6 to 1.8.0
(How much thread could a thread thread if a thread could thread thread?)
Added:
- multi:newSystemThreadedQueue()
- multi:systemThreadedBenchmark()
- More example files
- multi:canSystemThread() — true if an integration was added, false otherwise (For module creation)
- Fixed a few bugs in the loveManager
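For module creators, the check above pairs naturally with multi:getPlatform() from the 1.7.6 notes. A minimal sketch (the messages and the fallback branch are illustrative, not library behavior):

```lua
require("multi.all") -- pulls in the whole library
if multi:canSystemThread() then
	-- an integration (lanes or love2d) was loaded before this point
	print("system threads via: "..multi:getPlatform())
else
	print("no integration loaded; coroutine-based threads only")
end
```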
Using multi:systemThreadedBenchmark()
multi:systemThreadedBenchmark(3):OnBench(function(self,count)
	print("All Threads: "..count)
end)
multi:mainloop()
Using multi:newSystemThreadedQueue()
Quick Note: queues shared across multiple objects will be pulling from the same “queue”; keep this in mind when coding! Also, the queue respects direction: a push on the thread side cannot be popped on the thread side… Same goes for the main thread!
Turns out I was wrong about this…
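As a sketch of the handoff described above, assuming the lanes integration: the push/pop method names follow the wording in this note, and fetching the queue by name through sThread.waitFor is an assumption, not confirmed API.

```lua
GLOBAL, sThread = require("multi.integration.lanesManager").init()
local jobs = multi:newSystemThreadedQueue("jobs"):init() -- :init() restores the metatable after transfer
jobs:push("job1") -- pushed on this side...

-- On a system thread the queue would be fetched by name, e.g.:
-- local jobs = sThread.waitFor("jobs"):init()
-- print(jobs:pop()) -- ...and popped on the other side
```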
require("core.Library")
GLOBAL,sThread=require("multi.integration.loveManager").init()
-- Do not make the above local, this is the one difference that the lanesManager does not have
-- If these are local the functions will have the upvalues put into them that do not exist on the threaded side
-- You will need to ensure that the function does not refer to any upvalues in its code. It will print an error if it does though
-- Also, each thread has a .1 second delay! This is used to generate a random value for each thread!
require("core.GuiManager")
gui.ff.Color=Color.Black
queue=multi:newSystemThreadedQueue("queue") -- in love2d this will spawn a channel on both ends
multi:newThread("test!",function()
	-- ...
end)
multi:mainloop()
Update: 1.7.6
Fixed:
Typos like always
Added:
multi:getPlatform() — returns “love2d” if using the love2d platform or returns “lanes” if using lanes for threading
example files
In Events added the method setTask(func)
The old way still works and is more convenient to be honest, but I felt a method to do this was needed for completeness.
Updated:
some example files to reflect changes to the core. Changes allow for less typing
loveManager to require the compat if used so you don’t need 2 require lines to retrieve the library
Update: 1.7.5
Fixed some typos in the readme… (I am sure there are more; there are always more)
Added more features for module support
TODO:
Work on performance of the library… I see 3 places where I can make this thing run quicker
I’ll showcase some old versions of the multitasking library eventually so you can see its changes in days past!
Update: 1.7.4
Added: the example folder which will be populated with more examples in the near future!
The loveManager integration that mimics the lanesManager integration almost exactly to keep coding in both environments as close as possible. This is done mostly for library creation support!
An example of the loveManager in action using almost the same code as the lanesintergreationtest2.lua
NOTE: This code has only been tested to work on love2d version 1.10.2, though it should work with version 0.9.0
require("core.Library") -- Didn't add this to a repo yet! Will do eventually... Allows for injections and other cool things
-require("multi.compat.love2d") -- allows for multitasking and binds my libraies to the love2d engine that i am using
+require("multi.compat.love2d") -- allows for multitasking and binds my libraries to the love2d engine that i am using
GLOBAL,sThread=require("multi.integration.loveManager").init() -- load the love2d version of the lanesManager
--IMPORTANT
-- Do not make the above local, this is the one difference that the lanesManager does not have
multi:newThread("test0",function()
	-- when the main thread is holding there is a chance that error handling on the system threads may not work!
	-- instead we can do this
	while true do
		thread.skip(1) -- allow error handling to take place... Otherwise let's keep the main thread running on the low
		-- Before we held just because we could... But this is a game and we need to have logic continue
		--sThreadM.sleep(.001) -- Sleeping for .001 is a great way to keep cpu usage down. Make sure if you aren't doing work to rest. Abuse the hell out of GLOBAL if you need to :P
		if GLOBAL["DONE"] then
			t.text="Bench: "..GLOBAL["DONE"]
		end
	end
end)
require("multi.task")
require("multi.step")
-- ^ they are all part of the core now
Update: 1.7.2
Moved updaters, loops, and alarms into the init.lua file. I consider them core features and they are referenced in the init.lua file so they need to exist there. Threaded versions are still separate though. Added another example file
Update: 1.7.1 Bug Fixes Only
Update: 1.7.0
Modified: multi.integration.lanesManager.lua
It is now in a stable and simple state. Works with the latest lanes version! Tested with version 3.11; I cannot promise that everything will work with earlier versions. Future versions are good though.
Example Usage:
sThread is a handle to a global interface for system threads to interact with themselves
thread is the interface for multithreads as seen in the threading section
GLOBAL is a table that can be used throughout each and every thread
sThread has a few methods:
sThread.set(name,val) — you can use the GLOBAL table instead; it modifies the same table anyway
sThread.get(name) — you can use the GLOBAL table instead; it modifies the same table anyway
sThread.waitFor(name) — waits until a value exists, then returns it
sThread.getCores() — returns the number of cores on your cpu
sThread.sleep(n) — sleeps for n seconds, stopping the entire thread from running
sThread.hold(n) — holds until a condition is met
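Pulled together, the helpers above can be used like this. A sketch only: the variable names and values are illustrative, and the library must be initialized through the lanes integration first.

```lua
GLOBAL, sThread = require("multi.integration.lanesManager").init()
print("cores: "..sThread.getCores()) -- cpu core count
sThread.set("ready", true)           -- same table as GLOBAL["ready"] = true
print(sThread.waitFor("ready"))      -- blocks until the value exists, then returns it
sThread.sleep(.1)                    -- rest this thread briefly to keep cpu usage down
```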
multi:newLoop(function(dt)
	print(dt)
end)
-- Is now
step:OnStep(function(self,pos) -- same goes for tsteps as well
	print(pos)
end)
multi:newLoop(function(self,dt)
	print(dt)
end)
Reasoning: I wanted to keep objects consistent, but a lot of my older libraries use the old way of doing things. Therefore I added a backwards module
require("multi.all")
require("multi.compat.backwards[1,5,0]") -- allows for the use of features that were scrapped/changed in 1.6.0+
Update: 1.5.0
Added:
- An easy way to manage timeouts
- Small bug fixes
Update: 1.4.1 - First Public release of the library
IMPORTANT:
Every update I make aims to make things simpler, more efficient, and just better, but a lot of old code, which can be really big, uses a lot of older features. I know the pain of having to rewrite everything. My promise to my library users is that I will always have backwards support for older features! New ways may exist that are quicker and easier, but the old features/methods will be supported.