v13.0.0 #11

Merged
rayaman merged 20 commits from v13.0.0 into master 2019-03-22 21:21:37 -04:00
19 changed files with 4156 additions and 2292 deletions

4
.gitignore vendored
View File

@ -6,3 +6,7 @@ lanestestclient.lua
lanestest.lua
sample-node.lua
sample-master.lua
Ayn Rand - The Virtue of Selfishness-Mg4QJheclsQ.m4a
Atlas Shrugged by Ayn Rand Audiobook-9s2qrEau63E.webm
test.lua
test.lua

1328
Documentation.html Normal file

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large Load Diff

810
README.md
View File

@ -1,9 +1,7 @@
# multi Version: 12.2.2 Some more bug fixes
# multi Version: 13.0.0 Documentation finally done and bug fixes
My multitasking library for lua. It is written in pure lua, if you ignore the integrations and the love2d compat. If you find any bugs or have any issues, please let me know. **If you don't see a table of contents, try using the ReadMe.html file. It is easier to navigate than the readme**</br>
[TOC]
INSTALLING
----------
Note: The latest version of Lua lanes is required if you want to make use of system threads on lua 5.1+. I will update the dependencies for luarocks, since this library should work fine on lua 5.1+. You also need the lua-net library and the bin library; all are installed automatically using luarocks. However, you can do this manually if lanes and luasocket are installed. Links:
@ -18,9 +16,6 @@ If you want to use the system threads, then you'll need to install lanes!
```
luarocks install multi
```
Note: Soon you may be able to run multitasking code on multiple machines (network parallelism). This, however, will have to wait until I hammer out some bugs within the core of system threading itself.
See the rambling section to get an idea of how this will work.
Discord
-------
@ -29,20 +24,15 @@ https://discord.gg/U8UspuA</br>
Planned features/TODO
---------------------
- [ ] Make practical examples that show how you can solve real problems
- [ ] Finish the wiki stuff. (11% done) -- It's been at 11% for so long. I really need to get on this!
- [ ] Finish Documentation
- [ ] Test for unknown bugs -- This is always going on
- [x] ~~Network Parallelism~~ This was fun, I have some more plans for this as well
Known Bugs/Issues
-----------------
~~A bug concerns the SystemThreadedJobQueue, only 1 can be used for now. Might change in a future update~~ :D Fixed
Usage:</br>
-----
```lua
-- Basic usage. Alarms have been moved to the core of the library, so a plain require("multi") works as well
require("multi") -- gets the entire library
local multi = require("multi") -- gets the entire library
alarm=multi:newAlarm(3) -- in seconds; can go down to .001 (uses the built in os.clock())
alarm:OnRing(function(a)
print("3 Seconds have passed!")
@ -50,797 +40,7 @@ alarm:OnRing(function(a)
end)
multi:mainloop() -- the main loop of the program. multi:umanager() exists as well to allow integration into other loops, e.g. love2d's love.update function. More on this binding in the wiki!
```
The library is modular, so you only need to require what you need to (see the sketch after this list). Because of this, the global environment is altered</br>
There are many useful objects that you can use</br>
Check out the wiki for detailed usage, but here are the objects:</br>
- Process#</br>
- Queue#</br>
- Alarm</br>
- Loop</br>
- Event</br>
- Step</br>
- Range</br>
- TStep</br>
- TLoop</br>
- Condition</br>
- Connection</br>
- Timer</br>
- Updater</br>
- Thread*</br>
- Trigger</br>
- Task</br>
- Job</br>
- Function</br>
- Watcher</br>
Note: *Both a process and a queue act like the multi namespace but allow for some cool things. Because they use the other objects, an example of them will be done last*</br>
*Uses the built-in coroutine features of lua; these have an interesting interaction with the other means of multitasking</br>
Triggers are kind of useless after the creation of the Connection</br>
Watchers have no real purpose either; I made them just because.</br>
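For example, the core and the system-threading integration are pulled in separately (the module path below is taken from the lanes example further down this page, so treat it as illustrative rather than authoritative):</br>
```lua
local multi = require("multi") -- core objects: alarms, loops, events, steps, connections, ...
-- system threads via lanes live in their own integration module:
GLOBAL, THREAD = require("multi.integration.lanesManager").init()
```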
# Examples of each object being used</br>
We already showed alarms in action, so let's move on to a Loop object
Throughout these examples I am going to do some strange things to show other features of the library!
LOOPS
-----
```lua
-- Loops have been moved to the core of the library, so a plain require("multi") works as well
require("multi") -- gets the entire library
count=0
loop=multi:newLoop(function(self,dt) -- dt is delta time and self is a reference to the loop itself
count=count+1
if count > 10 then
self:Break() -- All methods on the multi objects are upper camel case, whereas methods on the multi or process/queuer namespace are lower camel case
-- self:Break() will stop the loop and trigger the OnBreak(func) method
-- Stopping is the act of pausing and deactivating the object! Every object supports the multiobj:Break() command!
else
print("Loop #"..count.."!")
end
end)
loop:OnBreak(function(self)
print("You broke me :(")
end)
multi:mainloop()
```
# Output
Loop #1!</br>
Loop #2!</br>
Loop #3!</br>
Loop #4!</br>
Loop #5!</br>
Loop #6!</br>
Loop #7!</br>
Loop #8!</br>
Loop #9!</br>
Loop #10!</br>
You broke me :(</br>
With loops out of the way, let's go down the line.
This library aims to be async-like. Everything is still on one thread *unless you are using the lanes integration module, a WIP* (a stable WIP, more on that later)
EVENTS
------
```lua
-- Events, these were the first objects introduced into the library. I seldom use them in their pure form, but later you'll see their advanced uses!
-- Events on their own don't really do much... We are going to need 2 objects at least to get something going
require("multi") -- gets the entire library
count=0
-- lets use the loop again to add to count!
loop=multi:newLoop(function(self,dt)
count=count+1
end)
event=multi:newEvent(function() return count==100 end) -- set the event
event:OnEvent(function(self) -- connect to the event object
loop:Pause() -- pauses the loop from running!
print("Stopped that loop!")
end) -- events, like alarms, need to be reset; the Reset() command works here as well (see the sketch after the output below)
multi:mainloop()
```
# Output
Stopped that loop!
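If you want the event to fire more than once, the Reset() mentioned in the comment above can be used; here is a minimal sketch, assuming Reset() re-arms an event the same way it does an alarm:
```lua
require("multi")
count=0
multi:newLoop(function(self,dt)
    count=count+1
end)
event=multi:newEvent(function() return count>=100 end)
event:OnEvent(function(self)
    print("Hit 100!")
    count=0 -- make the condition false again
    self:Reset() -- re-arm the event so OnEvent can fire the next time count reaches 100
end)
multi:mainloop()
```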
STEPS
-----
```lua
require("multi")
-- Steps are like for loops, but non-blocking... You can run a loop to infinity and everything will still run. I will combine Steps with Ranges in this example.
step1=multi:newStep(1,10,1,0) -- Some explaining is due. Argument 1 is the start #, Argument 2 is the resetAt # (inclusive), Argument 3 is the count # (in our case we are counting by +1; this can be -1, but then you need to adjust your start and resetAt numbers)
-- The 4th argument is for skipping. This is useful for timing and for basic priority management. A priority management system is included!
step2=multi:newStep(10,1,-1,1) -- a second step, notice the slight changes!
step1:OnStart(function(self)
print("Step Started!")
end)
step1:OnStep(function(self,pos)
if pos<=10 then -- The step only goes to 10
print("Stepping... "..pos)
else
print("How did I get here?")
end
end)
step1:OnEnd(function(self)
print("Done!")
-- We finished here, but I feel like we could have reused this step in some way... I could use Reset(), but what if I wanted to change it...
if self.endAt==10 then -- let's only loop once
self:Update(1,11,1,0) -- oh now we can reach that else condition!
end
-- Note Update() will restart the step!
end)
-- step2 is bored lets give it some love :P
step2.range=step2:newRange() -- Set up a range object to have a nested step in a sense! Each nest requires a new range
-- it is in your interest not to share ranges between objects! You can, however, do it if it suits your needs
step2:OnStep(function(self,pos)
-- for i=1,math.huge do
-- print("I am holding the code up because I can!")
--end
-- We don't want to hold things up, but we want to nest.
-- Note: a range is not necessary if the nested for loop has a small range; if, however, the range is rather large, you may want to allow other objects to do some work
for i in self.range(1,100) do
print(pos,i) -- Now our nested for loop is using a range object which allows for other objects to get some CPU time while this one is running
end
end)
-- TSteps are just like alarms and steps mixed together; the only difference in construction is the 4th argument. On a TStep that argument controls time. The default is 1
-- The Reset(n) works just like you would figure!
step3=multi:newTStep(1,10,.5,2) -- let's go from 1 to 10 counting by .5 every 2 seconds
step3:OnStep(function(self,pos)
print("Ok "..pos.."!")
end)
multi:mainloop()
```
# Output
Note: the output on this one is huge!!! So, I had to trim some parts down to "..."! You need to run this for yourself to see what is going on!</br>
Step Started!</br>
Stepping... 1</br>
10 1</br>
Stepping... 2</br>
10 2</br>
Stepping... 3</br>
10 3</br>
...</br>
Ok 9.5!</br>
Ok 10!</br>
TLOOPS
------
```lua
require("multi")
-- TLoops are loops that run every n seconds. We will also look at condition objects as well
-- Here we are going to modify the old loop to be a little different
count=0
loop=multi:newTLoop(function(self) -- We are only going to count with this loop but doing so using a condition!
while self:condition(self.cond) do
count=count+1
end
print("Count is "..count.."!")
self:Destroy() -- Lets destroy this object, casting it to the dark abyss MUHAHAHA!!!
-- the reference to this object will be a phantom object that does nothing!
end,1) -- Notice the ',1' after the function! This is where you put your time value!
loop.cond=multi:newCondition(function() return count<=100 end) -- conditions need a bit of work before I am happy with them
multi:mainloop()
```
# Output
Count is 101!
Connections
-----------
These are my favorite objects and you'll see why. They are very useful objects for async connections!
```lua
require("multi")
-- Let's create the events
yawn={} -- I'll just leave that there
OnCustomSafeEvent=multi:newConnection(true) -- let's pcall the calls in case something goes wrong (the default)
OnCustomEvent=multi:newConnection(false) -- let's not pcall the calls and let errors happen... We are good at coding, so let's get a speed advantage by not pcalling. Pcalling is useful for plugins and stuff that may have been coded badly, so you can ignore those connections if need be.
OnCustomEvent:Bind(yawn) -- create the connection lookup data in yawn
-- Let's connect to them; a recent update adds a nice syntax for connecting to these (see the 13.0.0 sketch after the output below)
cd1=OnCustomSafeEvent:Connect(function(arg1,arg2,...)
print("CSE1",arg1,arg2,...)
end,"bob") -- lets give this connection a name
cd2=OnCustomSafeEvent:Connect(function(arg1,arg2,...)
print("CSE2",arg1,arg2,...)
end,"joe") -- lets give this connection a name
cd3=OnCustomSafeEvent:Connect(function(arg1,arg2,...)
print("CSE3",arg1,arg2,...)
end) -- lets not give this connection a name
-- no need for Connect, but I kept that function for backwards compatibility.
OnCustomEvent(function(arg1,arg2,...)
print(arg1,arg2,...)
end)
-- Now within some loop/other object you trigger the connection like
OnCustomEvent:Fire(1,2,"Hello!!!") -- fire all connections
-- You may have noticed that some events have names! See the following example!
OnCustomSafeEvent:getConnection("bob"):Fire(1,100,"Bye!") -- fire only bob!
OnCustomSafeEvent:getConnection("joe"):Fire(1,100,"Hello!") -- fire only joe!!
OnCustomSafeEvent:Fire(1,100,"Hi Ya Folks!!!") -- fire them all!!!
-- Connections have more to them than that though!
-- As seen above, cd1-cd3 are hooks to the connection object. This allows you to remove a connection
-- For Example:
cd1:Remove() -- remove this connection from the master connection object
print("------")
OnCustomSafeEvent:Fire(1,100,"Hi Ya Folks!!!") -- fire them all again!!!
-- To remove all connections use:
OnCustomSafeEvent:Remove()
print("------")
OnCustomSafeEvent:Fire(1,100,"Hi Ya Folks!!!") -- fire them all again!!!
```
# Output
1 2 Hello!!!</br>
CSE1 1 100 Bye!</br>
CSE2 1 100 Hello!</br>
CSE1 1 100 Hi Ya Folks!!!</br>
CSE2 1 100 Hi Ya Folks!!!</br>
CSE3 1 100 Hi Ya Folks!!!</br>
CSE2 1 100 Hi Ya Folks!!!</br>
CSE3 1 100 Hi Ya Folks!!!</br>
</br>
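As of 13.0.0 there is one more piece of syntactic sugar (this is a condensed version of the Connection example in the changelog further down the page): a connection stored on a multi object or on a multi:newConnector() can be fired by calling it as a method.
```lua
require("multi")
conn = multi:newConnector() -- a plain container for connections (added in 13.0.0)
conn.OnTest = multi:newConnection()
conn.OnTest(function() -- calling a connection with a function connects to it
    print("Yes!")
end)
conn:OnTest() -- calling it as a method fires it (new 13.0.0 syntax)
```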
You may think timers should be bundled with alarms, but they are a bit different and have cool features</br>
TIMERS
------
```lua
-- You see, the thing is that all time-based objects (e.g. Alarms, TSteps, and Loops) use timers. Timers are more low level!
require("multi")
local clock = os.clock
function sleep(n) -- seconds
local t0 = clock()
while clock() - t0 <= n do end
end -- we will use this later!
timer=multi:newTimer()
timer:Start()
-- lets do a mock alarm
set=3 -- 3 seconds
a=0
while timer:Get()<=set do
-- waiting...
a=a+1
end
print(set.." second(s) have passed!")
-- Timers can do one more thing that is interesting and that is pausing them!
timer:Pause()
print(timer:Get()) -- should be really close to 'set'
sleep(3)
print(timer:Get()) -- should be really close to 'set'
timer:Resume()
sleep(1)
print(timer:Get()) -- should be really close to the value of set + 1
timer:Pause()
print(timer:Get()) -- should be really close to the value of set + 1
sleep(3)
print(timer:Get()) -- should be really close to the value of set + 1
timer:Resume()
sleep(1)
print(timer:Get()) -- should be really close to the value of set + 2
```
# Output
Note: This will make more sense when you run it for yourself</br>
3 second(s) have passed!</br>
3.001</br>
3.001</br>
4.002</br>
4.002</br>
4.002</br>
5.003</br>
UPDATER
-------
```lua
-- Updaters have been moved to the core of the library, so a plain require("multi") works as well
require("multi")
updater=multi:newUpdater(5) -- simple, think of a loop with the skip feature of a step
updater:OnUpdate(function(self)
--print("updating...")
end)
-- Here every 5 steps the updater will do stuff!
-- But I feel it is now time to touch on priority management, so let's get into basic priority stuff and then a more advanced version of it
--[[
multi.Priority_Core -- Highest form of priority
multi.Priority_High
multi.Priority_Above_Normal
multi.Priority_Normal -- The default form of priority
multi.Priority_Below_Normal
multi.Priority_Low
multi.Priority_Idle -- Lowest form of priority
Note: These only take effect when you enable priority, otherwise everything is at a core like level!
We aren't going to use regular objects to test priority, but rather benchmarks!
to set priority on an object though you would do
multiobj:setPriority(one of the above)
]]
-- let's bench for 3 seconds using the 3 forms of priority! First, no priority
multi:benchMark(3,nil,"Regular Bench: "):OnBench(function() -- OnBench() allows us to run each bench one after the other!
print("P1\n---------------")
multi:enablePriority()
multi:benchMark(3,multi.Priority_Core,"Core:")
multi:benchMark(3,multi.Priority_High,"High:")
multi:benchMark(3,multi.Priority_Above_Normal,"Above_Normal:")
multi:benchMark(3,multi.Priority_Normal,"Normal:")
multi:benchMark(3,multi.Priority_Below_Normal,"Below_Normal:")
multi:benchMark(3,multi.Priority_Low,"Low:")
multi:benchMark(3,multi.Priority_Idle,"Idle:"):OnBench(function()
print("P2\n---------------")
-- Finally, the 3rd form
multi:enablePriority2()
multi:benchMark(3,multi.Priority_Core,"Core:")
multi:benchMark(3,multi.Priority_High,"High:")
multi:benchMark(3,multi.Priority_Above_Normal,"Above_Normal:")
multi:benchMark(3,multi.Priority_Normal,"Normal:")
multi:benchMark(3,multi.Priority_Below_Normal,"Below_Normal:")
multi:benchMark(3,multi.Priority_Low,"Low:")
multi:benchMark(3,multi.Priority_Idle,"Idle:")
end)
end)
multi:mainloop() -- Notice how the past few examples did not need this; only actors need to be in a loop! More on this in the wiki.
```
# Output
Note: These numbers will vary drastically depending on your compiler and CPU power</br>
Regular Bench: 2094137 Steps in 3 second(s)!</br>
P1</br>
Below_Normal: 236022 Steps in 3 second(s)!</br>
Normal: 314697 Steps in 3 second(s)!</br>
Above_Normal: 393372 Steps in 3 second(s)!</br>
High: 472047 Steps in 3 second(s)!</br>
Core: 550722 Steps in 3 second(s)!</br>
Low: 157348 Steps in 3 second(s)!</br>
Idle: 78674 Steps in 3 second(s)!</br>
P2</br>
Core: 994664 Steps in 3 second(s)!</br>
High: 248666 Steps in 3 second(s)!</br>
Above_Normal: 62166 Steps in 3 second(s)!</br>
Normal: 15541 Steps in 3 second(s)!</br>
Below_Normal: 3885 Steps in 3 second(s)!</br>
Idle: 242 Steps in 3 second(s)!</br>
Low: 971 Steps in 3 second(s)!</br>
Notice: Even though I started each bench at the same time, the order in which they finished differed; the order is likely to vary on your machine as well! (A sketch of setting priority on a regular object follows below.)</br>
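Outside of benchmarks, priority is set on an ordinary object with multiobj:setPriority() as the comment block above describes; a minimal sketch (how strongly this skews CPU time depends on which priority mode you enabled):
```lua
require("multi")
multi:enablePriority()
work=multi:newLoop(function(self,dt)
    -- important work that should get most of the CPU time
end)
work:setPriority(multi.Priority_High)
background=multi:newLoop(function(self,dt)
    -- housekeeping that can wait
end)
background:setPriority(multi.Priority_Idle)
multi:mainloop()
```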
Processes
---------
A process allows you to group Actor objects within a controllable interface.
```lua
require("multi")
proc=multi:newProcess() -- takes an optional file as an argument, but for this example we aren't going to use that
-- a process works just like the multi object!
b=0
loop=proc:newTLoop(function(self)
a=a+1
proc:Pause() -- pauses the CPU cycler for this process! Individual objects are not paused; however, because they aren't getting CPU time, they act as if they were paused
end,.1)
updater=proc:newUpdater(multi.Priority_Idle) -- priority can be used in skip arguments as well to manage priority without enabling it!
updater:OnUpdate(function(self)
b=b+1
end)
a=0 -- a counter
loop2=proc:newLoop(function(self,dt)
print("Lets Go!")
self:hold(3) -- this will keep this object from doing anything! Note: You can only have one hold active at a time! Multiple are possible, but results may not be as they seem; see the Hold explanation below for how hold works
-- Within a process using hold will keep it alive until the hold is satisfied!
print("Done being held for 1 second")
self:hold(function() return a>10 end)
print("A is now: "..a.." b is also: "..b)
self:Destroy()
self.Parent:Pause() -- lets say you don't have the reference to the process!
os.exit()
end)
-- Notice this is now being created on the multi namespace
event=multi:newEvent(function() return os.clock()>=1 end)
event:OnEvent(function(self)
proc:Resume()
self:Destroy()
end)
proc:Start()
multi:mainloop()
```
# Output
Lets Go!</br>
Done being held for 1 second</br>
A is now: 29 b is also: 479</br>
**Hold: This method works as follows**
```lua
function multi:hold(task)
self:Pause() -- pause the current object
self.held=true -- set held
if type(task)=='number' then -- a sleep cmd
local timer=multi:newTimer()
timer:Start()
while timer:Get()<task do -- This while loop is what makes using multiple holds tricky... If the outer while is good before the nested one then the outer one will have to wait! There is a way around this though!
if love then
self.Parent:lManager()
else
self.Parent:Do_Order()
end
end
self:Resume()
self.held=false
elseif type(task)=='function' then
local env=self.Parent:newEvent(task)
env:OnEvent(function(envt) envt:Pause() envt.Active=false end)
while env.Active do
if love then
self.Parent:lManager()
else
self.Parent:Do_Order()
end
end
env:Destroy()
self:Resume()
self.held=false
else
print('Error Data Type!!!')
end
end
```
Queuer (WIP)
------------
A queuer works just like a process; however, objects are processed in the order that they were created...
```lua
require("multi")
queue = multi:newQueuer()
queue:newAlarm(3):OnRing(function()
print("Ring ring!!!")
end)
queue:newStep(1,10):OnStep(function(self,pos)
print(pos)
end)
queue:newLoop(function(self,dt)
if dt==3 then
self:Break()
print("Done")
end
end)
queue:Start()
multi:mainloop()
```
# Expected Output
Note: the queuer still does not work as expected!</br>
Ring ring!!!</br>
1</br>
2</br>
3</br>
4</br>
5</br>
6</br>
7</br>
8</br>
9</br>
10</br>
Done</br>
# Actual Output
Done</br>
1</br>
2</br>
3</br>
4</br>
5</br>
6</br>
7</br>
8</br>
9</br>
10</br>
Ring ring!!!</br>
Threads
-------
These fix the hold problem that you get with regular objects, and they work the same! They even have some extra features that make them really useful.</br>
```lua
require("multi")
test=multi:newThreadedProcess("main") -- you can thread processes and all Actors; see the Threadable Actors list below for the actors you can thread!
test2=multi:newThreadedProcess("main2")
count=0
test:newLoop(function(self,dt)
count=count+1
thread.sleep(.01)
end)
test2:newLoop(function(self,dt)
print("Hello!")
thread.sleep(1) -- sleep for some time
end)
-- threaded objects take a name first, then the rest of the arguments as normal
step=multi:newThreadedTStep("step",1,10)
step:OnStep(function(self,p)
print("step",p)
thread.skip(21) -- skip n cycles
end)
step:OnEnd(function()
print("Killing thread!")
thread.kill() -- kill the thread
end)
loop=multi:newThreadedLoop("loop",function(self,dt)
print(dt)
thread.sleep(1.1)
end)
loop2=multi:newThreadedLoop("loop",function(self,dt)
print(dt)
thread.hold(function() return count>=100 end)
print("Count is "..count)
os.exit()
end)
alarm=multi:newThreadedAlarm("alarm",1)
alarm:OnRing(function(self)
print("Ring")
self:Reset()
end)
multi:mainloop()
```
# Output
Ring</br>
0.992</br>
0.992</br>
Hello!</br>
step 1</br>
step 2</br>
Hello!</br>
Ring</br>
2.092</br>
step 3</br>
Hello!</br>
Ring</br>
Count is 100</br>
Threadable Actors
-----------------
- Alarms
- Events
- Loop/TLoop
- Process
- Step/TStep
Functions
---------
If you ever wanted to pause a function, great, now you can.
The Function object gives you a callable that can be paused and resumed, letting it run free in a sense.
```lua
require("multi")
func=multi:newFunction(function(self,arg1,arg2,...)
self:Pause()
return arg1
end)
print(func("Hello"))
print(func("Hello2")) -- returns PAUSED allows for the calling of functions that should only be called once. returns PAUSED instantly if paused
func:Resume()
print(func("Hello3"))
```
# Output
Hello</br>
PAUSED</br>
Hello3</br>
ThreadedUpdater
---------------
```lua
-- Works the same as a regular updater!
require("multi")
multi:newThreadedUpdater("Test",10000):OnUpdate(function(self)
print(self.pos)
end)
multi:mainloop()
```
# Output
1</br>
2</br>
...</br>
.inf</br>
Triggers
--------
Triggers were what I used before connections became a thing. Function objects are a lot like triggers and can be paused as well, while triggers cannot...</br>
They are simple to use, but in most cases you are better off using a connection</br>
```lua
require("multi")
-- They work like connections but can only have one event bound to them
trig=multi:newTrigger(function(self,a,b,c,...)
print(a,b,c,...)
end)
trig:Fire(1,2,3)
trig:Fire(1,2,3,"Hello",true)
```
# Output
1 2 3</br>
1 2 3 Hello true</br>
Tasks
-----
Tasks allow you to run a block of code before the multi mainloop does its thing. Tasks still have a use, but depending on how you program they aren't needed.</br>
```lua
require("multi")
multi:newTask(function()
print("Hi!")
end)
multi:newLoop(function(self,dt)
print("Which came first the task or the loop?")
self:Break()
end)
multi:newTask(function()
print("Hello there!")
end)
multi:mainloop()
```
# Output
Hi!</br>
Hello there!</br>
Which came first the task or the loop?</br>
As seen in the example above, the tasks were run before anything else in the mainloop! This is useful when making libraries around the multitasking features where things need to happen in a certain order!</br>
Jobs
----
Jobs were a strange feature created for throttling connections! When I was building an IRC bot around this library, I couldn't have messages posting too fast due to rate restrictions. Jobs allow functions to be added to a queue and executed after a certain amount of time has passed.
```lua
require("multi") -- jobs use alarms I am pondering if alarms should be added to the core or if jobs should use timers instead...
-- jobs are built into the core of the library so no need to require them
print(multi:hasJobs())
multi:setJobSpeed(1) -- set job speed to 1 second
multi:newJob(function()
print("A job!")
end,"test")
multi:newJob(function()
print("Another job!")
multi:removeJob("test") -- removes all jobs with name "test"
end,"test")
multi:newJob(function()
print("Almost done!")
end,"test")
multi:newJob(function()
print("Final job!")
end,"test")
print(multi:hasJobs())
print("There are "..multi:getJobs().." jobs in the queue!")
multi:mainloop()
```
# Output
false 0</br>
true 4</br>
There are 4 jobs in the queue!</br>
A job!</br>
Another job!</br>
Watchers
--------
Watchers allow you to monitor a variable and trigger an event when the variable has changed!
```lua
require("multi")
a=0
watcher=multi:newWatcher(_G,"a") -- watch a in the global environment
watcher:OnValueChanged(function(self,old,new)
print(old,new)
end)
tloop=multi:newTLoop(function(self)
a=a+1
end,1)
multi:mainloop()
```
# Output
0 1</br>
1 2</br>
2 3</br>
...</br>
.inf-1 inf</br>
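The constructor takes the table to watch and the key, so presumably any table can be watched, not just _G; a small sketch under that assumption:
```lua
require("multi")
settings={volume=5}
watcher=multi:newWatcher(settings,"volume") -- watch settings.volume instead of a global
watcher:OnValueChanged(function(self,old,new)
    print("volume changed from "..old.." to "..new)
end)
multi:newAlarm(1):OnRing(function()
    settings.volume=7
end)
multi:mainloop()
```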
Timeout management
------------------
```lua
-- Note: I used a tloop, so I could control the output of the program a bit.
require("multi")
a=0
inc=1 -- change to 0 to see the condition never met, 1 to see the first attempt time out but the second succeed, or 2 to see it meet the condition on the first go.
loop=multi:newTLoop(function(self)
print("Looping...")
a=a+inc
if a==14 then
self:ResolveTimer("1","2","3") -- ... any number of arguments can be passed to the resolve handler
-- this will also automatically pause the object that it is bound to
end
end,.1)
loop:SetTime(1)
loop:OnTimerResolved(function(self,a,b,c) -- the handler receives self and the passed arguments
print("We did it!",a,b,c)
end)
loop:OnTimedOut(function(self)
if not TheSecondTry then
print("Loop timed out!",self.Type,"Trying again...")
self:ResetTime(2)
self:Resume()
TheSecondTry=true
else
print("We just couldn't do it!") -- print if we don't get anything working
end
end)
multi:mainloop()
```
# Output (Change the value inc as indicated in the comment to see the outcomes!)
Looping...</br>
Looping...</br>
Looping...</br>
Looping...</br>
Looping...</br>
Looping...</br>
Looping...</br>
Looping...</br>
Looping...</br>
Loop timed out! tloop Trying again...</br>
Looping...</br>
Looping...</br>
Looping...</br>
Looping...</br>
Looping...</br>
We did it! 1 2 3</br>
Rambling
--------
5/23/18:
When it comes to running code across different systems we run into a problem: it takes time to send objects from one machine to another. In the beginning, only local networks will be supported. I may add support for sending commands to another network to do computing, like having your own lua cloud. Userdata will never be allowed to run on other machines; it is not possible unless the library you are using allows userdata to be turned into a string and back into an object. You want to use this feature for commands that take a long time or that need to be run in huge numbers (millions+), the reason being that networks are not that "fast" and only simple objects can be sent. If you mirror your environment then you can do some cool things.
The planned structure will be something like this:
multi-Single Threaded Multitasking
multi-Threads
multi-System Threads
multi-Network threads
where netThreads can contain systemThreads which can in turn contain both Threads and single threaded multitasking
Nothing has been built yet, but the system will work something like this:
# host:
```lua
sGLOBAL, nGLOBAL, sThread = require("multi.integration.networkManager").init() -- This will determine if one is using lanes, love2d, or luvit
multi:Host("MainSystem") -- tell the network that this is the main system. Uses broadcast so that nodes know how to find the host!
nThread = multi:newNetworkThread("NetThread_1",function(...)
-- basic usage
nGLOBAL["RemoteVaraible"] = true -- will sync data to all nodes and the host
sGLOBAL["LocalMachineVaraible"] = true -- will sync data to all system threads on the local machine
return "Hello Network!" -- send "Hello Network" back to the host node
end)
multi:mainloop()
```
# node
```lua
GLOBAL,sThread=require("multi.integration.networkManager").init() -- This will determine if one is using lanes,love2d, or luvit
node = multi:newNode("NodeName","MainSystem") -- Search the network for the host, connect to it and be ready for requests!
-- On the main thread, in a simple multi:newNetworkThread thread, and in non-system threads, you can access global data without an issue. It is when dealing with system threads that you have a problem.
node:setLog{
maxLines = 10000,
cleanOnInterval = true,
cleanInterval = "day", -- every day Supports(day, week, month, year)
noLog = false -- default is false, make true if you do not need a log
}
node:settings{
maxJobs = 100, -- Job queues will respect this, as will the host when it is figuring out which node is under the least load. Default: 0 or infinite
sendLoadInterval = 60, -- every 60 seconds update the host on the node's load
sendLoad = true -- default is true, tells the host how stressed the system is
}
multi:mainloop()
-- Note: the node will contain a log of all the commands that it gets. A file called "NodeName.log" will contain the info. You can set the limit by lines or file size. Also, you can set it to clear the log every interval of time if an error does not exist. All errors are both logged and sent to the host as well. You can have more than one host and more than one node(duh :P).
```
The goal of the node is to set up a simple and easy way to run commands on a remote machine.
There are 2 main ways you can use this feature. 1. One node per machine with system threads being able to use the full processing power of the machine. 2. Multiple nodes on one machine where each node is acting like its own thread. And of course, a mix of the two is indeed possible.
Love2d's sleeping reduces the CPU time, making my load detection think the system is under more load, thus preventing it from sleeping... I will investigate other means. As of right now it will not eat all your CPU if threads are active. For now, I suggest killing threads that aren't needed anymore. With lanes, threads at idle use 0% CPU and it is amazing. A state machine may solve what I need though: one state being an idle state that sleeps and only goes into the active state if a job request or data is sent to it... After some time of not being under load it will switch back into the idle state... We'll see what happens.
Love2d doesn't like to send functions through channels; by default, it does not support this. I achieve this by dumping the function and loadstring-ing it on the thread. This, however, is slow. For the System Threaded Job Queue, I had to change my original idea of sending functions as jobs. The way you do it now is to register a job function once and then call that job across the thread through a queue. Each worker thread pops from the queue and returns the job. The job ID is automatically updated and allows you to keep track of the order that the data comes in. A table with # indexes can be used to organize the data...
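To illustrate the dump-and-loadstring trick mentioned above (a simplified sketch, not the library's actual implementation; it assumes Lua 5.1/LuaJIT where loadstring exists and a function with no upvalues):
```lua
-- A love.thread channel cannot carry a Lua function directly,
-- so its bytecode is serialized to a string first.
local job = function(x) return x * 2 end
local dumped = string.dump(job) -- bytecode as a plain string; can be pushed through a channel
-- ...on the receiving thread:
local restored = loadstring(dumped) -- use load() on Lua 5.2+
print(restored(21)) --> 42
```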
Regarding benchmarking: if you see my benchmarks and wonder why they are 10x better, it's because I am using luajit for my tests. I highly recommend using luajit with my library, but lua 5.1 will work just as well, only not as fast.
So, while working on the jobQueue:doToAll() method I figured out why love2d's threaded tables were acting up when more than one thread was sharing the table. It turns out one thread was eating all the pops from the queue and starving all the others... I'll need to use the same trick I did with GLOBAL to fix the problem... However, at the rate I am going, threading in love will become way slower. I might use the regular GLOBAL to manage data internally for threaded tables...
I have been using this (EventManager --> MultiManager --> now multi) for my own purposes and started making this when I first started learning lua. You can see how the code changed and evolved throughout the years. I tried to include all the versions that still existed on my HDD.
I added my old versions to this library... It started out as the EventManager and was kind of crappy, but it was the start of this library. It kept getting better and better until it became what it is today. There are some features that no longer exist in the latest version, but they were removed because they were useless... I added these files to the GitHub so that those interested can see into my mind, in a sense, and see how I developed the library before I used GitHub.
The first version of the EventManager was function based, not object based, and benched at about 2000 steps per second... Yeah, that was bad... I used loadstring and it was a mess... Look and see how it grew throughout the years; I think it may interest some of you guys!
Currently no bugs that I know of :D

View File

@ -12,209 +12,315 @@
<h1 id="changes"><a name="changes" href="#changes"></a>Changes</h1><p class="toc" style="undefined"></p><ul>
<li><ul>
<li><span class="title">
<a href="#update-12.2.2-time-for-some-more-bug-fixes!" title="Update 12.2.2 Time for some more bug fixes! ">Update 12.2.2 Time for some more bug fixes! </a>
<a href="#update-13.0.0-added-some-documentation,-and-some-new-features-too-check-it-out!" title="Update 13.0.0 Added some documentation, and some new features too check it out!">Update 13.0.0 Added some documentation, and some new features too check it out!</a>
</span>
<!--span class="number">
0
</span-->
</li>
<li><span class="title">
<a href="#update-12.2.1-time-for-some-bug-fixes!" title="Update 12.2.1 Time for some bug fixes! ">Update 12.2.1 Time for some bug fixes! </a>
<a href="#update-12.2.2-time-for-some-more-bug-fixes!" title="Update 12.2.2 Time for some more bug fixes!">Update 12.2.2 Time for some more bug fixes!</a>
</span>
<!--span class="number">
1
</span-->
</li>
<li><span class="title">
<a href="#update-12.2.0" title="Update 12.2.0">Update 12.2.0</a>
<a href="#update-12.2.1-time-for-some-bug-fixes!" title="Update 12.2.1 Time for some bug fixes!">Update 12.2.1 Time for some bug fixes!</a>
</span>
<!--span class="number">
2
</span-->
</li>
<li><span class="title">
<a href="#update-12.1.0" title="Update 12.1.0">Update 12.1.0</a>
<a href="#update-12.2.0" title="Update 12.2.0">Update 12.2.0</a>
</span>
<!--span class="number">
3
</span-->
</li>
<li><span class="title">
<a href="#update:-12.0.0-big-update-(lots-of-additions-some-changes)" title="Update: 12.0.0 Big update (Lots of additions some changes)">Update: 12.0.0 Big update (Lots of additions some changes)</a>
<a href="#update-12.1.0" title="Update 12.1.0">Update 12.1.0</a>
</span>
<!--span class="number">
4
</span-->
</li>
<li><span class="title">
<a href="#update:-1.11.1" title="Update: 1.11.1">Update: 1.11.1</a>
<a href="#update:-12.0.0-big-update-(lots-of-additions-some-changes)" title="Update: 12.0.0 Big update (Lots of additions some changes)">Update: 12.0.0 Big update (Lots of additions some changes)</a>
</span>
<!--span class="number">
5
</span-->
</li>
<li><span class="title">
<a href="#update:-1.11.0" title="Update: 1.11.0">Update: 1.11.0</a>
<a href="#update:-1.11.1" title="Update: 1.11.1">Update: 1.11.1</a>
</span>
<!--span class="number">
6
</span-->
</li>
<li><span class="title">
<a href="#update:-1.10.0" title="Update: 1.10.0">Update: 1.10.0</a>
<a href="#update:-1.11.0" title="Update: 1.11.0">Update: 1.11.0</a>
</span>
<!--span class="number">
7
</span-->
</li>
<li><span class="title">
<a href="#update:-1.9.2" title="Update: 1.9.2">Update: 1.9.2</a>
<a href="#update:-1.10.0" title="Update: 1.10.0">Update: 1.10.0</a>
</span>
<!--span class="number">
8
</span-->
</li>
<li><span class="title">
<a href="#update:-1.9.1" title="Update: 1.9.1">Update: 1.9.1</a>
<a href="#update:-1.9.2" title="Update: 1.9.2">Update: 1.9.2</a>
</span>
<!--span class="number">
9
</span-->
</li>
<li><span class="title">
<a href="#update:-1.9.0" title="Update: 1.9.0">Update: 1.9.0</a>
<a href="#update:-1.9.1" title="Update: 1.9.1">Update: 1.9.1</a>
</span>
<!--span class="number">
10
</span-->
</li>
<li><span class="title">
<a href="#update:-1.8.7" title="Update: 1.8.7">Update: 1.8.7</a>
<a href="#update:-1.9.0" title="Update: 1.9.0">Update: 1.9.0</a>
</span>
<!--span class="number">
11
</span-->
</li>
<li><span class="title">
<a href="#update:-1.8.6" title="Update: 1.8.6">Update: 1.8.6</a>
<a href="#update:-1.8.7" title="Update: 1.8.7">Update: 1.8.7</a>
</span>
<!--span class="number">
12
</span-->
</li>
<li><span class="title">
<a href="#update:-1.8.5" title="Update: 1.8.5">Update: 1.8.5</a>
<a href="#update:-1.8.6" title="Update: 1.8.6">Update: 1.8.6</a>
</span>
<!--span class="number">
13
</span-->
</li>
<li><span class="title">
<a href="#update:-1.8.4" title="Update: 1.8.4">Update: 1.8.4</a>
<a href="#update:-1.8.5" title="Update: 1.8.5">Update: 1.8.5</a>
</span>
<!--span class="number">
14
</span-->
</li>
<li><span class="title">
<a href="#update:-1.8.3" title="Update: 1.8.3">Update: 1.8.3</a>
<a href="#update:-1.8.4" title="Update: 1.8.4">Update: 1.8.4</a>
</span>
<!--span class="number">
15
</span-->
</li>
<li><span class="title">
<a href="#update:-1.8.2" title="Update: 1.8.2">Update: 1.8.2</a>
<a href="#update:-1.8.3" title="Update: 1.8.3">Update: 1.8.3</a>
</span>
<!--span class="number">
16
</span-->
</li>
<li><span class="title">
<a href="#update:-1.8.1" title="Update: 1.8.1">Update: 1.8.1</a>
<a href="#update:-1.8.2" title="Update: 1.8.2">Update: 1.8.2</a>
</span>
<!--span class="number">
17
</span-->
</li>
<li><span class="title">
<a href="#update:-1.7.6" title="Update: 1.7.6">Update: 1.7.6</a>
<a href="#update:-1.8.1" title="Update: 1.8.1">Update: 1.8.1</a>
</span>
<!--span class="number">
18
</span-->
</li>
<li><span class="title">
<a href="#update:-1.7.5" title="Update: 1.7.5">Update: 1.7.5</a>
<a href="#update:-1.7.6" title="Update: 1.7.6">Update: 1.7.6</a>
</span>
<!--span class="number">
19
</span-->
</li>
<li><span class="title">
<a href="#update:-1.7.4" title="Update: 1.7.4">Update: 1.7.4</a>
<a href="#update:-1.7.5" title="Update: 1.7.5">Update: 1.7.5</a>
</span>
<!--span class="number">
20
</span-->
</li>
<li><span class="title">
<a href="#update:-1.7.3" title="Update: 1.7.3">Update: 1.7.3</a>
<a href="#update:-1.7.4" title="Update: 1.7.4">Update: 1.7.4</a>
</span>
<!--span class="number">
21
</span-->
</li>
<li><span class="title">
<a href="#update:-1.7.2" title="Update: 1.7.2">Update: 1.7.2</a>
<a href="#update:-1.7.3" title="Update: 1.7.3">Update: 1.7.3</a>
</span>
<!--span class="number">
22
</span-->
</li>
<li><span class="title">
<a href="#update:-1.7.1-bug-fixes-only" title="Update: 1.7.1 Bug Fixes Only">Update: 1.7.1 Bug Fixes Only</a>
<a href="#update:-1.7.2" title="Update: 1.7.2">Update: 1.7.2</a>
</span>
<!--span class="number">
23
</span-->
</li>
<li><span class="title">
<a href="#update:-1.7.0" title="Update: 1.7.0">Update: 1.7.0</a>
<a href="#update:-1.7.1-bug-fixes-only" title="Update: 1.7.1 Bug Fixes Only">Update: 1.7.1 Bug Fixes Only</a>
</span>
<!--span class="number">
24
</span-->
</li>
<li><span class="title">
<a href="#update:-1.6.0" title="Update: 1.6.0">Update: 1.6.0</a>
<a href="#update:-1.7.0" title="Update: 1.7.0">Update: 1.7.0</a>
</span>
<!--span class="number">
25
</span-->
</li>
<li><span class="title">
<a href="#update:-1.5.0" title="Update: 1.5.0">Update: 1.5.0</a>
<a href="#update:-1.6.0" title="Update: 1.6.0">Update: 1.6.0</a>
</span>
<!--span class="number">
26
</span-->
</li>
<li><span class="title">
<a href="#update:-1.4.1---first-public-release-of-the-library" title="Update: 1.4.1 - First Public release of the library">Update: 1.4.1 - First Public release of the library</a>
<a href="#update:-1.5.0" title="Update: 1.5.0">Update: 1.5.0</a>
</span>
<!--span class="number">
27
</span-->
</li>
<li><span class="title">
<a href="#update:-1.4.1---first-public-release-of-the-library" title="Update: 1.4.1 - First Public release of the library">Update: 1.4.1 - First Public release of the library</a>
</span>
<!--span class="number">
28
</span-->
</li>
</ul>
</li>
</ul>
<p></p><h2 id="update-12.2.2-time-for-some-more-bug-fixes!"><a name="update-12.2.2-time-for-some-more-bug-fixes!" href="#update-12.2.2-time-for-some-more-bug-fixes!"></a>Update 12.2.2 Time for some more bug fixes! </h2><p>Fixed: multi.Stop() not actually stopping due to the new pirority management scheme and preformance boost changes.<br>Thats all for this update</p><h2 id="update-12.2.1-time-for-some-bug-fixes!"><a name="update-12.2.1-time-for-some-bug-fixes!" href="#update-12.2.1-time-for-some-bug-fixes!"></a>Update 12.2.1 Time for some bug fixes! </h2><p>Fixed: SystemThreadedJobQueues</p><ul>
<p></p><h2 id="update-13.0.0-added-some-documentation,-and-some-new-features-too-check-it-out!"><a name="update-13.0.0-added-some-documentation,-and-some-new-features-too-check-it-out!" href="#update-13.0.0-added-some-documentation,-and-some-new-features-too-check-it-out!"></a>Update 13.0.0 Added some documentation, and some new features too check it out!</h2><p><strong>Quick note</strong> on the 13.0.0 update:<br>This update I went all in finding bugs and improving proformance within the library. I added some new features and the new task manager, which I used as a way to debug the library was a great help, so much so thats it is now a permanent feature. Its been about half a year since my last update, but so much work needed to be done. I hope you can find a use in your code to use my library. I am extremely proud of my work; 7 years of development, I learned so much about lua and programming through the creation of this library. It was fun, but there will always be more to add and bugs crawling there way in. I cant wait to see where this library goes in the future!</p><p>Fixed: Tons of bugs, I actually went through the entire library and did a full test of everything, I mean everything, while writing the documentation.<br>Changed: </p><ul>
<li>A few things, to make concepts in the library more clear.</li><li>The way functions returned paused status. Before it would return “PAUSED” now it returns nil, true if paused</li><li>Modified the connection object to allow for some more syntaxial suger!</li><li>System threads now trigger an OnError connection that is a member of the object itself. multi.OnError() is no longer triggered for a system thread that crashes!</li></ul><p>Connection Example:</p><pre class="lua hljs"><code class="lua" data-origin="<pre><code class=&quot;lua&quot;>loop = multi:newTLoop(function(self)
self:OnLoops() -- new way to Fire a connection! Only works when used on a multi object, bin objects, or any object that contains a Type variable
end,1)
loop.OnLoops = multi:newConnection()
loop.OnLoops(function()
print(&quot;Looping&quot;)
end)
multi:mainloop()
</code></pre>">loop = multi:newTLoop(<span class="hljs-function"><span class="hljs-keyword">function</span><span class="hljs-params">(self)</span></span>
self:OnLoops() <span class="hljs-comment">-- new way to Fire a connection! Only works when used on a multi object, bin objects, or any object that contains a Type variable</span>
<span class="hljs-keyword">end</span>,<span class="hljs-number">1</span>)
loop.OnLoops = multi:newConnection()
loop.OnLoops(<span class="hljs-function"><span class="hljs-keyword">function</span><span class="hljs-params">()</span></span>
<span class="hljs-built_in">print</span>(<span class="hljs-string">"Looping"</span>)
<span class="hljs-keyword">end</span>)
multi:mainloop()
</code></pre><p>Function Example:</p><pre class="lua hljs"><code class="lua" data-origin="<pre><code class=&quot;lua&quot;>func = multi:newFunction(function(self,a,b)
self:Pause()
return 1,2,3
end)
print(func()) -- returns: 1, 2, 3
print(func()) -- nil, true
</code></pre>">func = multi:newFunction(<span class="hljs-function"><span class="hljs-keyword">function</span><span class="hljs-params">(self,a,b)</span></span>
self:Pause()
<span class="hljs-keyword">return</span> <span class="hljs-number">1</span>,<span class="hljs-number">2</span>,<span class="hljs-number">3</span>
<span class="hljs-keyword">end</span>)
<span class="hljs-built_in">print</span>(func()) <span class="hljs-comment">-- returns: 1, 2, 3</span>
<span class="hljs-built_in">print</span>(func()) <span class="hljs-comment">-- nil, true</span>
</code></pre><p>Removed:</p><ul>
<li>Ranges and conditions — corutine based threads can emulate what these objects did and much better!</li><li>Due to the creation of hyper threaded processes the following objects are no more!<br><del>multi:newThreadedEvent()</del><br><del>multi:newThreadedLoop()</del><br><del>multi:newThreadedTLoop()</del><br><del>multi:newThreadedStep()</del><br><del>multi:newThreadedTStep()</del><br><del>multi:newThreadedAlarm()</del><br><del>multi:newThreadedUpdater()</del><br><del>multi:newTBase()</del> — Acted as the base for creating the other objects</li></ul><p>These didnt have much use in their previous form, but with the addition of hyper threaded processes the goals that these objects aimed to solve are now possible using a process</p><p>Fixed:</p><ul>
<li>There were some bugs in the networkmanager.lua file. Desrtoy -&gt; Destroy some misspellings.</li><li>Massive object management bugs which caused performance to drop like a rock.</li><li>Found a bug with processors not having the Destroy() function implemented properly.</li><li>Found an issue with the rockspec which is due to the networkManager additon. The net Library and the multi Library are now codependent if using that feature. Going forward you will have to now install the network library separately</li><li>Insane proformance bug found in the networkManager file, where each connection to a node created a new thread (VERY BAD) If say you connected to 100s of threads, you would lose a lot of processing power due to a bad implementation of this feature. But it goes futhur than this, the net library also creates a new thread for each connection made, so times that initial 100 by about 3, you end up with a system that quickly eats itself. I have to do tons of rewriting of everything. Yet another setback for the 13.0.0 release (Im releasing 13.0.0 though this hasnt been ironed out just yet)</li><li>Fixed an issue where any argument greater than 256^2 or 65536 bytes is sent the networkmanager would soft crash. This was fixed by increading the limit to 256^4 or 4294967296. The fix was changing a 2 to a 4. Arguments greater than 256^4 would be impossible in 32 bit lua, and highly unlikely even in lua 64 bit. Perhaps someone is reading an entire file into ram and then sending the entire file that they read over a socket for some reason all at once!?</li><li>Fixed an issue with processors not properly destroying objects within them and not being destroyable themselves</li><li>Fixed a bug where pause and resume would duplicate objects! Not good</li><li>Noticed that the switching of lua states, corutine based threading, is slower than multi-objs (Not by much though).</li><li>multi:newSystemThreadedConnection(name,protect) — I did it! It works and I believe all the gotchas are fixed as well.<br>— Issue one, if a thread died that was connected to that connection all connections would stop since the queue would get clogged! FIXED<br>— There is one thing, the connection does have some handshakes that need to be done before it functions as normal!</li></ul><p>Added:</p><ul>
<li>Documentation, the purpose of 13.0.0, orginally going to be 12.2.3, but due to the amount of bugs and features added it couldnt be a simple bug fix update.</li><li>multi:newHyperThreadedProcess(STRING name) — This is a version of the threaded process that gives each object created its own coroutine based thread which means you can use thread.* without affecting other objects created within the hyper threaded processes. Though, creating a self contained single thread is a better idea which when I eventually create the wiki page Ill discuss</li><li>multi:newConnector() — A simple object that allows you to use the new connection Fire syntax without using a multi obj or the standard object format that I follow.</li><li>multi:purge() — Removes all references to objects that are contained withing the processes list of tasks to do. Doing this will stop all objects from functioning. Calling Resume on an object should make it work again.</li><li>multi:getTasksDetails(STRING format) — Simple function, will get massive updates in the future, as of right now It will print out the current processes that are running; listing their type, uptime, and priority. More useful additions will be added in due time. Format can be either a string “s” or “t” see below for the table format</li><li>multi:endTask(TID) — Use multi:getTasksDetails(“t”) to get the tid of a task</li><li>multi:enableLoadDetection() — Reworked how load detection works. It gives better values now, but it still needs some work before I am happy with it</li><li>THREAD.getID() — returns a unique ID for the current thread. This varaiable is visible to the main thread as well by accessing it through the returned thread object. OBJ.Id Do not confuse this with thread.* this refers to the system threading interface. Each thread, including the main thread has a threadID the main thread has an ID of 0!</li><li>multi.print(…) works like normal print, but only prints if the setting print is set to true</li><li>setting: <code>print</code> enables multi.print() to work</li><li>STC: IgnoreSelf defaults to false, if true a Fire command will not be sent to the self</li><li>STC: OnConnectionAdded(function(connID)) — Is fired when a connection is added you can use STC:FireTo(id,…) to trigger a specific connection. Works like the named non threaded connections, only the ids are genereated for you.</li><li>STC: FireTo(id,…) — Described above.</li></ul><pre class="lua hljs"><code class="lua" data-origin="<pre><code class=&quot;lua&quot;>package.path=&quot;?/init.lua;?.lua;&quot;..package.path
local multi = require(&quot;multi&quot;)
conn = multi:newConnector()
conn.OnTest = multi:newConnection()
conn.OnTest(function()
print(&quot;Yes!&quot;)
end)
test = multi:newHyperThreadedProcess(&quot;test&quot;)
test:newTLoop(function()
print(&quot;HI!&quot;)
conn:OnTest()
end,1)
test:newLoop(function()
print(&quot;HEY!&quot;)
thread.sleep(.5)
end)
multi:newAlarm(3):OnRing(function()
test:Sleep(10)
end)
test:Start()
multi:mainloop()
</code></pre>"><span class="hljs-built_in">package</span>.path=<span class="hljs-string">"?/init.lua;?.lua;"</span>..<span class="hljs-built_in">package</span>.path
<span class="hljs-keyword">local</span> multi = <span class="hljs-built_in">require</span>(<span class="hljs-string">"multi"</span>)
conn = multi:newConnector()
conn.OnTest = multi:newConnection()
conn.OnTest(<span class="hljs-function"><span class="hljs-keyword">function</span><span class="hljs-params">()</span></span>
<span class="hljs-built_in">print</span>(<span class="hljs-string">"Yes!"</span>)
<span class="hljs-keyword">end</span>)
test = multi:newHyperThreadedProcess(<span class="hljs-string">"test"</span>)
test:newTLoop(<span class="hljs-function"><span class="hljs-keyword">function</span><span class="hljs-params">()</span></span>
<span class="hljs-built_in">print</span>(<span class="hljs-string">"HI!"</span>)
conn:OnTest()
<span class="hljs-keyword">end</span>,<span class="hljs-number">1</span>)
test:newLoop(<span class="hljs-function"><span class="hljs-keyword">function</span><span class="hljs-params">()</span></span>
<span class="hljs-built_in">print</span>(<span class="hljs-string">"HEY!"</span>)
thread.sleep(.<span class="hljs-number">5</span>)
<span class="hljs-keyword">end</span>)
multi:newAlarm(<span class="hljs-number">3</span>):OnRing(<span class="hljs-function"><span class="hljs-keyword">function</span><span class="hljs-params">()</span></span>
test:Sleep(<span class="hljs-number">10</span>)
<span class="hljs-keyword">end</span>)
test:Start()
multi:mainloop()
</code></pre><p>Table format for getTasksDetails(STRING format)</p><pre class="lua hljs"><code class="lua" data-origin="<pre><code class=&quot;lua&quot;>{
{TID = 1,Type=&quot;&quot;,Priority=&quot;&quot;,Uptime=0}
{TID = 2,Type=&quot;&quot;,Priority=&quot;&quot;,Uptime=0}
...
{TID = n,Type=&quot;&quot;,Priority=&quot;&quot;,Uptime=0}
ThreadCount = 0
threads={
[Thread_Name]={
Uptime = 0
}
}
}
</code></pre>">{
{TID = <span class="hljs-number">1</span>,Type=<span class="hljs-string">""</span>,Priority=<span class="hljs-string">""</span>,Uptime=<span class="hljs-number">0</span>}
{TID = <span class="hljs-number">2</span>,Type=<span class="hljs-string">""</span>,Priority=<span class="hljs-string">""</span>,Uptime=<span class="hljs-number">0</span>}
...
{TID = n,Type=<span class="hljs-string">""</span>,Priority=<span class="hljs-string">""</span>,Uptime=<span class="hljs-number">0</span>}
ThreadCount = <span class="hljs-number">0</span>
threads={
[Thread_Name]={
Uptime = <span class="hljs-number">0</span>
}
}
}
</code></pre><p><strong>Note:</strong> After adding the getTasksDetails() function I noticed many areas where threads, and tasks were not being cleaned up and fixed the leaks. I also found out that a lot of tasks were starting by default and made them enable only. If you compare the benchmark from this version to last version you;ll notice a signifacant increase in performance.</p><p><strong>Going forward:</strong></p><ul>
<li>Work on system threaded functions</li><li>work on the node manager</li><li>patch up bugs</li><li>finish documentstion</li></ul><h2 id="update-12.2.2-time-for-some-more-bug-fixes!"><a name="update-12.2.2-time-for-some-more-bug-fixes!" href="#update-12.2.2-time-for-some-more-bug-fixes!"></a>Update 12.2.2 Time for some more bug fixes!</h2><p>Fixed: multi.Stop() not actually stopping due to the new pirority management scheme and preformance boost changes.<br>Thats all for this update</p><h2 id="update-12.2.1-time-for-some-bug-fixes!"><a name="update-12.2.1-time-for-some-bug-fixes!" href="#update-12.2.1-time-for-some-bug-fixes!"></a>Update 12.2.1 Time for some bug fixes!</h2><p>Fixed: SystemThreadedJobQueues</p><ul>
<li>You can now make as many job queues as you want! Just a warning when using a large amount of cores for the queue it takes a second or 2 to set up the jobqueues for data transfer. I am unsure if this is a lanes thing or not, but love2d has no such delay when setting up the jobqueue!</li><li>You now connect to the OnReady in the jobqueue object. No more holding everything else as you wait for a job queue to be ready</li><li>Jobqueues:doToAll now passes the queues multi interface as the first and currently only argument</li><li>No longer need to use jobqueue.OnReady() The code is smarter and will send the pushed jobs automatically when the threads are ready</li></ul><p>Fixed: SystemThreadedConnection</p><ul>
<li>They work the exact same way as before, but actually work as expected now. The issue before was how i implemented it. Now each connection knows the number of instances of that object that ecist. This way I no longer have to do fancy timings that may or may not work. I can send exactly enough info for each connection to consume from the queue.</li></ul><p>Removed: multi:newQueuer</p><ul>
<li>This feature has no real use after corutine based threads were introduced. You can use those to get the same effect as the queuer and do it better too. </li></ul><p>Going forward:</p><ul>
<li>This feature has no real use after corutine based threads were introduced. You can use those to get the same effect as the queuer and do it better too. </li></ul><p>Going forwardGoing forward:</p><ul>
<li>Will I ever finish steralization? Who knows, but being able to save state would be nice. The main issue is there is no simple way to save state. While I can provide methods to allow one to turn the objects into strings and back, there is no way for me to make your code work with it in a simple way. For now only the basic functions will be here.</li><li>I need to make better documentation for this library as well. In its current state, all I have are examples and not a list of what is what.</li></ul><h1 id="example"><a name="example" href="#example"></a>Example</h1><pre class="lua hljs"><code class="lua" data-origin="<pre><code class=&quot;lua&quot;>package.path=&quot;?/init.lua;?.lua;&quot;..package.path
multi = require(&quot;multi&quot;)
GLOBAL, THREAD = require(&quot;multi.integration.lanesManager&quot;).init()

View File

@ -1,5 +1,128 @@
#Changes
[TOC]
Update 13.0.0 Added some documentation, and some new features too check it out!
-------------
**Quick note** on the 13.0.0 update:
In this update I went all in on finding bugs and improving performance within the library. I added some new features, and the new task manager, which I used as a way to debug the library, was a great help, so much so that it is now a permanent feature. It's been about half a year since my last update, but so much work needed to be done. I hope you can find a use for my library in your code. I am extremely proud of my work; over 7 years of development I learned so much about lua and programming through the creation of this library. It was fun, but there will always be more to add and bugs crawling their way in. I can't wait to see where this library goes in the future!
Fixed: Tons of bugs. I actually went through the entire library and did a full test of everything, I mean everything, while writing the documentation.
Changed:
- A few things, to make concepts in the library more clear.
- The way functions returned paused status. Before it would return "PAUSED" now it returns nil, true if paused
- Modified the connection object to allow for some more syntaxial suger!
- System threads now trigger an OnError connection that is a member of the object itself. multi.OnError() is no longer triggered for a system thread that crashes!
Connection Example:
```lua
loop = multi:newTLoop(function(self)
self:OnLoops() -- new way to Fire a connection! Only works when used on a multi object, bin objects, or any object that contains a Type variable
end,1)
loop.OnLoops = multi:newConnection()
loop.OnLoops(function()
print("Looping")
end)
multi:mainloop()
```
Function Example:
```lua
func = multi:newFunction(function(self,a,b)
self:Pause()
return 1,2,3
end)
print(func()) -- returns: 1, 2, 3
print(func()) -- nil, true
```
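For the OnError change above, here is a minimal sketch under the lanes integration. The thread name "crasher" and the handler body are illustrative; the handler is assumed to receive the thread object, the raw error, and a formatted message, in that order:
```lua
package.path="?/init.lua;?.lua;"..package.path
local multi = require("multi")
local GLOBAL, THREAD = require("multi.integration.lanesManager").init()
-- A system thread that errors on purpose
local st = multi:newSystemThread("crasher", function()
	error("something broke inside the thread")
end)
-- The crash now surfaces on the thread object's own OnError connection,
-- not on multi.OnError()
st.OnError(function(thr, err, msg)
	print("caught:", msg)
end)
multi:mainloop()
```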
Removed:
- Ranges and conditions -- coroutine-based threads can emulate what these objects did, and do it much better!
- Due to the creation of hyper threaded processes, the following objects are no more!
-- ~~multi:newThreadedEvent()~~
-- ~~multi:newThreadedLoop()~~
-- ~~multi:newThreadedTLoop()~~
-- ~~multi:newThreadedStep()~~
-- ~~multi:newThreadedTStep()~~
-- ~~multi:newThreadedAlarm()~~
-- ~~multi:newThreadedUpdater()~~
-- ~~multi:newTBase()~~ -- Acted as the base for creating the other objects
These didn't have much use in their previous form, but with the addition of hyper threaded processes, the goals that these objects aimed to solve are now possible using a process.
Fixed:
- There were some bugs in the networkManager.lua file: some misspellings such as Desrtoy -> Destroy.
- Massive object management bugs which caused performance to drop like a rock.
- Found a bug with processors not having the Destroy() function implemented properly.
- Found an issue with the rockspec which is due to the networkManager addition. The net library and the multi library are codependent if using that feature. Going forward you will have to install the network library separately
- Insane performance bug found in the networkManager file, where each connection to a node created a new thread (VERY BAD). If, say, you connected to 100s of nodes, you would lose a lot of processing power due to a bad implementation of this feature. But it goes further than this: the net library also creates a new thread for each connection made, so multiply that initial 100 by about 3 and you end up with a system that quickly eats itself. I had to do tons of rewriting of everything. Yet another setback for the 13.0.0 release (I'm releasing 13.0.0 though this hasn't been ironed out just yet)
- Fixed an issue where, if any argument greater than 256^2 (65536) bytes was sent, the networkManager would soft crash. This was fixed by increasing the limit to 256^4 (4294967296) bytes; the fix was changing a 2 to a 4 in the length prefix. Arguments greater than 256^4 bytes would be impossible in 32-bit Lua, and highly unlikely even in 64-bit Lua. Perhaps someone is reading an entire file into RAM and then sending the whole thing over a socket all at once!?
- Fixed an issue with processors not properly destroying objects within them and not being destroyable themselves
- Fixed a bug where pause and resume would duplicate objects! Not good
- Noticed that the switching of Lua states, coroutine-based threading, is slower than multi-objs (not by much though).
- multi:newSystemThreadedConnection(name,protect) -- I did it! It works and I believe all the gotchas are fixed as well.
-- Issue one: if a thread that was connected to the connection died, all connections would stop since the queue would get clogged! FIXED
-- There is one thing: the connection does have some handshakes that need to complete before it functions as normal!
Added:
- Documentation, the purpose of 13.0.0. It was originally going to be 12.2.3, but due to the number of bugs fixed and features added it couldn't be a simple bug fix update.
- multi:newHyperThreadedProcess(STRING name) -- This is a version of the threaded process that gives each object created its own coroutine-based thread, which means you can use thread.* without affecting other objects created within the hyper threaded process. Though creating a self-contained single thread is a better idea, which I'll discuss when I eventually create the wiki page
- multi:newConnector() -- A simple object that allows you to use the new connection Fire syntax without using a multi obj or the standard object format that I follow.
- multi:purge() -- Removes all references to objects that are contained within the process's list of tasks to do. Doing this will stop all objects from functioning. Calling Resume on an object should make it work again.
- multi:getTasksDetails(STRING format) -- Simple function that will get massive updates in the future. As of right now it will list the current processes that are running: their type, uptime, and priority. More useful additions will come in due time. Format can be either "s" or "t"; see below for the table format and the usage sketch after it
- multi:endTask(TID) -- Use multi:getTasksDetails("t") to get the TID of a task
- multi:enableLoadDetection() -- Reworked how load detection works. It gives better values now, but it still needs some work before I am happy with it
- THREAD.getID() -- Returns a unique ID for the current thread. This value is also visible to the main thread through the returned thread object as OBJ.Id. Do not confuse THREAD.* with thread.*; THREAD refers to the system threading interface. Each thread, including the main thread, has a thread ID; the main thread has an ID of 0!
- multi.print(...) works like normal print, but only prints if the setting print is set to true
- setting: `print` enables multi.print() to work
- STC: IgnoreSelf defaults to false; if true, a Fire command will not be sent back to the connection that fired it
- STC: OnConnectionAdded(function(connID)) -- Fired when a connection is added; you can use STC:FireTo(id,...) to trigger a specific connection. Works like the named non-threaded connections, only the IDs are generated for you.
- STC: FireTo(id,...) -- Described above. A sketch of these STC additions follows the example below.
```lua
package.path="?/init.lua;?.lua;"..package.path
local multi = require("multi")
conn = multi:newConnector()
conn.OnTest = multi:newConnection()
conn.OnTest(function()
print("Yes!")
end)
test = multi:newHyperThreadedProcess("test")
test:newTLoop(function()
print("HI!")
conn:OnTest()
end,1)
test:newLoop(function()
print("HEY!")
thread.sleep(.5)
end)
multi:newAlarm(3):OnRing(function()
test:Sleep(10)
end)
test:Start()
multi:mainloop()
```
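And a hedged sketch of the new STC members (IgnoreSelf, OnConnectionAdded, FireTo) mentioned above, again under the lanes integration. The connection name "chatter", the worker body, and the exact handshake timing are assumptions; THREAD.getID() on the main thread reports 0 as described, and multi.print only shows output because the `print` setting is enabled:
```lua
package.path="?/init.lua;?.lua;"..package.path
local multi = require("multi")
local GLOBAL, THREAD = require("multi.integration.lanesManager").init()
local conn = multi:newSystemThreadedConnection("chatter"):init()
conn.IgnoreSelf = true -- don't echo our own Fire() back to this side
conn(function(...)
	print("main got:", ...)
end)
conn.OnConnectionAdded(function(connID)
	-- multi.print only produces output when the `print` setting is enabled
	multi.print("connection added:", connID)
	conn:FireTo(connID, "hello from the main thread, id", THREAD.getID()) -- main thread id is 0
end)
multi:newSystemThread("worker", function()
	local multi = require("multi")
	local conn = THREAD.waitFor("chatter"):init()
	conn(function(...)
		print("worker got:", ...)
	end)
	multi:newTLoop(function()
		conn:Fire("ping from the worker")
	end, 1)
	multi:mainloop()
end)
multi:mainloop{ print = true } -- the new `print` setting turns on multi.print output
```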
Table format for getTasksDetails(STRING format)
```lua
{
	{TID = 1, Type = "", Priority = "", Uptime = 0},
	{TID = 2, Type = "", Priority = "", Uptime = 0},
	-- ...
	{TID = n, Type = "", Priority = "", Uptime = 0},
	ThreadCount = 0,
	threads = {
		[Thread_Name] = { -- one entry per named coroutine-based thread
			Uptime = 0
		}
	}
}
```
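As an illustration, here is a minimal sketch of consuming that table with multi:getTasksDetails("t") and ending a task with multi:endTask(); the five-second alarm and the one-minute cutoff are purely illustrative:
```lua
local multi = require("multi")
multi:newAlarm(5):OnRing(function()
	local details = multi:getTasksDetails("t")
	print("tasks:", #details, "coroutine threads:", details.ThreadCount)
	for i = 1, #details do
		local task = details[i]
		print(task.TID, task.Type, task.Priority, task.Uptime)
		if task.Uptime > 60 then -- illustrative policy: stop anything running for over a minute
			multi:endTask(task.TID)
		end
	end
end)
multi:mainloop()
```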
**Note:** After adding the getTasksDetails() function I noticed many areas where threads and tasks were not being cleaned up, and fixed the leaks. I also found out that a lot of tasks were starting by default and made them enable-only. If you compare the benchmark from this version to the last version you'll notice a significant increase in performance.
**Going forward:**
- Work on system threaded functions
- Work on the node manager
- Patch up bugs
- Finish documentation
Update 12.2.2 Time for some more bug fixes!
-------------
Fixed: multi.Stop() not actually stopping due to the new priority management scheme and performance boost changes.
@ -19,7 +142,7 @@ Fixed: SystemThreadedConnection
Removed: multi:newQueuer
- This feature has no real use after corutine based threads were introduced. You can use those to get the same effect as the queuer and do it better too.
Going forward:
- Will I ever finish serialization? Who knows, but being able to save state would be nice. The main issue is there is no simple way to save state. While I can provide methods to allow one to turn the objects into strings and back, there is no way for me to make your code work with it in a simple way. For now only the basic functions will be here.
- I need to make better documentation for this library as well. In its current state, all I have are examples and not a list of what is what.

View File

View File

@ -5,30 +5,13 @@ nGLOBAL = require("multi.integration.networkManager").init()
node = multi:newNode{
crossTalk = false, -- default value, allows nodes to talk to eachother. WIP NOT READY YET!
allowRemoteRegistering = true, -- allows you to register functions from the master on the node, default is false
name = nil, --"TESTNODE", -- default value is nil, if nil a random name is generated. Naming nodes are important if you assign each node on a network with a different task
name = "MASTERPC", -- default value is nil, if nil a random name is generated. Naming nodes are important if you assign each node on a network with a different task
--noBroadCast = true, -- if using the node manager, set this to true to save on some cpu cycles
--managerDetails = {"localhost",12345}, -- connects to the node manager if one exists
}
function RemoteTest(a,b,c) -- a function that we will be executing remotely
--print("Yes I work!",a,b,c)
multi:newThread("waiter",function()
print("Hello!")
while true do
thread.sleep(2)
node:pushTo("Main","This is a test")
end
end)
print("Yes I work!",a,b,c)
end
multi:newThread("some-test",function()
local dat = node:pop()
while true do
thread.skip(10)
if dat then
print(dat)
end
dat = node:pop()
end
end,"NODE_TESTNODE")
settings = {
priority = 0, -- 1 or 2
stopOnError = true,

View File

@ -1,7 +1,7 @@
--[[
MIT License
Copyright (c) 2018 Ryan Ward
Copyright (c) 2019 Ryan Ward
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
@ -37,6 +37,7 @@ multi.OnMouseMoved = multi:newConnection()
multi.OnDraw = multi:newConnection()
multi.OnTextInput = multi:newConnection()
multi.OnUpdate = multi:newConnection()
multi.OnQuit = multi:newConnection()
multi.OnPreLoad(function()
local function Hook(func,conn)
if love[func]~=nil then
@ -51,6 +52,7 @@ multi.OnPreLoad(function()
end
end
end
Hook("quit",multi.OnQuit)
Hook("keypressed",multi.OnKeyPressed)
Hook("keyreleased",multi.OnKeyReleased)
Hook("mousepressed",multi.OnMousePressed)
@ -67,4 +69,8 @@ multi.OnPreLoad(function()
end
end)
end)
multi.OnQuit(function()
multi.Stop()
love.event.quit()
end)
return multi

File diff suppressed because it is too large Load Diff

View File

@ -1,7 +1,7 @@
--[[
MIT License
Copyright (c) 2018 Ryan Ward
Copyright (c) 2019 Ryan Ward
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
@ -32,6 +32,8 @@ end
-- Step 1 get lanes
lanes=require("lanes").configure()
local multi = require("multi") -- get it all and have it on all lanes
multi.SystemThreads = {}
local thread = thread
multi.isMainThread=true
function multi:canSystemThread()
return true
@ -39,10 +41,10 @@ end
function multi:getPlatform()
return "lanes"
end
-- Step 2 set up the linda objects
-- Step 2 set up the Linda objects
local __GlobalLinda = lanes.linda() -- handles global stuff
local __SleepingLinda = lanes.linda() -- handles sleeping stuff
-- For convience a GLOBAL table will be constructed to handle requests
-- For convenience a GLOBAL table will be constructed to handle requests
local GLOBAL={}
setmetatable(GLOBAL,{
__index=function(t,k)
@ -52,7 +54,7 @@ setmetatable(GLOBAL,{
__GlobalLinda:set(k,v)
end,
})
-- Step 3 rewrite the thread methods to use lindas
-- Step 3 rewrite the thread methods to use Lindas
local THREAD={}
function THREAD.set(name,val)
__GlobalLinda:set(name,val)
@ -82,6 +84,9 @@ end
function THREAD.getCores()
return THREAD.__CORES
end
function THREAD.getThreads()
return GLOBAL.__THREADS__
end
if os.getOS()=="windows" then
THREAD.__CORES=tonumber(os.getenv("NUMBER_OF_PROCESSORS"))
else
@ -93,6 +98,10 @@ end
function THREAD.getName()
return THREAD_NAME
end
function THREAD.getID()
return THREAD_ID
end
_G.THREAD_ID = 0
--[[ Step 4 We need to get sleeping working to handle timing... We want idle wait, not busy wait
Idle wait keeps the CPU running better where busy wait wastes CPU cycles... Lanes does not have a sleep method
however, a linda recieve will in fact be a idle wait! So we use that and wrap it in a nice package]]
@ -109,36 +118,74 @@ function THREAD.hold(n)
end
local rand = math.random(1,10000000)
-- Step 5 Basic Threads!
local threads = {}
local count = 1
local started = false
local livingThreads = {}
function multi:newSystemThread(name,func,...)
multi.InitSystemThreadErrorHandler()
rand = math.random(1,10000000)
local c={}
local __self=c
c.name=name
c.Name = name
c.Id = count
livingThreads[count] = {true,name}
local THREAD_ID = count
count = count + 1
c.Type="sthread"
c.creationTime = os.clock()
local THREAD_NAME=name
local function func2(...)
local multi = require("multi")
_G["THREAD_NAME"]=THREAD_NAME
_G["THREAD_ID"]=THREAD_ID
math.randomseed(rand)
func(...)
if _G.__Needs_Multi then
multi:mainloop()
end
THREAD.kill()
end
c.thread=lanes.gen("*", func2)(...)
function c:kill()
--self.status:Destroy()
self.thread:cancel()
print("Thread: '"..self.name.."' has been stopped!")
multi.print("Thread: '"..self.name.."' has been stopped!")
end
c.status=multi:newUpdater(multi.Priority_IDLE)
c.status.link=c
c.status:OnUpdate(function(self)
local v,err,t=self.link.thread:join(.001)
if err then
multi.OnError:Fire(self.link,err,"Error in systemThread: '"..self.link.name.."' <"..err..">")
self:Destroy()
end
end)
table.insert(multi.SystemThreads,c)
c.OnError = multi:newConnection()
GLOBAL["__THREADS__"]=livingThreads
return c
end
print("Integrated Lanes!")
multi.OnSystemThreadDied = multi:newConnection()
function multi.InitSystemThreadErrorHandler()
if started==true then return end
started = true
multi:newThread("ThreadErrorHandler",function()
local threads = multi.SystemThreads
while true do
thread.sleep(.5) -- switching states often takes a huge hit on performance. half a second to tell me there is an error is good enough.
for i=#threads,1,-1 do
local v,err,t=threads[i].thread:join(.001)
if err then
if err:find("Thread was killed!") then
livingThreads[threads[i].Id] = {false,threads[i].Name}
multi.OnSystemThreadDied:Fire(threads[i].Id)
GLOBAL["__THREADS__"]=livingThreads
table.remove(threads,i)
else
threads[i].OnError:Fire(threads[i],err,"Error in systemThread: '"..threads[i].name.."' <"..err..">")
livingThreads[threads[i].Id] = {false,threads[i].Name}
multi.OnSystemThreadDied:Fire(threads[i].Id)
GLOBAL["__THREADS__"]=livingThreads
table.remove(threads,i)
end
end
end
end
end)
end
multi.print("Integrated Lanes!")
multi.integration={} -- for module creators
multi.integration.GLOBAL=GLOBAL
multi.integration.THREAD=THREAD

View File

@ -1,7 +1,7 @@
--[[
MIT License
Copyright (c) 2018 Ryan Ward
Copyright (c) 2019 Ryan Ward
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
@ -34,6 +34,7 @@ multi.integration.love2d.ThreadBase=[[
tab={...}
__THREADID__=table.remove(tab,1)
__THREADNAME__=table.remove(tab,1)
THREAD_ID=table.remove(tab,1)
require("love.filesystem")
require("love.system")
require("love.timer")
@ -167,6 +168,9 @@ end
function sThread.getName()
return __THREADNAME__
end
function sThread.getID()
return THREAD_ID
end
function sThread.kill()
error("Thread was killed!")
end
@ -195,6 +199,7 @@ func=loadDump([=[INSERT_USER_CODE]=])(unpack(tab))
multi:mainloop()
]]
GLOBAL={} -- Allow main thread to interact with these objects as well
_G.THREAD_ID = 0
__proxy__={}
setmetatable(GLOBAL,{
__index=function(t,k)
@ -214,9 +219,13 @@ setmetatable(GLOBAL,{
THREAD={} -- Allow main thread to interact with these objects as well
multi.integration.love2d.mainChannel=love.thread.getChannel("__MainChan__")
isMainThread=true
multi.SystemThreads = {}
function THREAD.getName()
return __THREADNAME__
end
function THREAD.getID()
return THREAD_ID
end
function ToStr(val, name, skipnewlines, depth)
skipnewlines = skipnewlines or false
depth = depth or 0
@ -295,12 +304,19 @@ local function randomString(n)
end
return str
end
local count = 1
local livingThreads = {}
function multi:newSystemThread(name,func,...) -- the main method
multi.InitSystemThreadErrorHandler()
local c={}
c.name=name
c.Name = name
c.ID=c.name.."<ID|"..randomString(8)..">"
c.Id=count
count = count + 1
livingThreads[count] = {true,name}
c.thread=love.thread.newThread(multi.integration.love2d.ThreadBase:gsub("INSERT_USER_CODE",dump(func)))
c.thread:start(c.ID,c.name,...)
c.thread:start(c.ID,c.name,THREAD_ID,...)
function c:kill()
multi.integration.GLOBAL["__DIEPLZ"..self.ID.."__"]="__DIEPLZ"..self.ID.."__"
end
@ -308,7 +324,7 @@ function multi:newSystemThread(name,func,...) -- the main method
end
function love.threaderror( thread, errorstr )
multi.OnError:Fire(thread,errorstr)
print("Error in systemThread: "..tostring(thread)..": "..errorstr)
multi.print("Error in systemThread: "..tostring(thread)..": "..errorstr)
end
local THREAD={}
function THREAD.set(name,val)
@ -333,8 +349,7 @@ end
__channels__={}
multi.integration.GLOBAL=GLOBAL
multi.integration.THREAD=THREAD
updater=multi:newUpdater()
updater:OnUpdate(function(self)
updater=multi:newLoop(function(self)
local data=multi.integration.love2d.mainChannel:pop()
while data do
if type(data)=="string" then
@ -365,8 +380,37 @@ updater:OnUpdate(function(self)
data=multi.integration.love2d.mainChannel:pop()
end
end)
multi.OnSystemThreadDied = multi:newConnection()
local started = false
function multi.InitSystemThreadErrorHandler()
if started==true then return end
started = true
multi:newThread("ThreadErrorHandler",function()
local threads = multi.SystemThreads
while true do
thread.sleep(.5) -- switching states often takes a huge hit on performance. half a second to tell me there is an error is good enough.
for i=#threads,1,-1 do
local v,err,t=threads[i].thread:join(.001)
if err then
if err:find("Thread was killed!") then
livingThreads[threads[i].Id] = {false,threads[i].Name}
multi.OnSystemThreadDied:Fire(threads[i].Id)
GLOBAL["__THREADS__"]=livingThreads
table.remove(threads,i)
else
threads[i].OnError:Fire(threads[i],err,"Error in systemThread: '"..threads[i].name.."' <"..err..">")
livingThreads[threads[i].Id] = {false,threads[i].Name}
multi.OnSystemThreadDied:Fire(threads[i].Id)
GLOBAL["__THREADS__"]=livingThreads
table.remove(threads,i)
end
end
end
end
end)
end
require("multi.integration.shared")
print("Integrated Love2d!")
multi.print("Integrated Love2d!")
return {
init=function(t)
if t then

View File

@ -1,7 +1,7 @@
--[[
MIT License
Copyright (c) 2018 Ryan Ward
Copyright (c) 2019 Ryan Ward
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
@ -114,7 +114,7 @@ local function _INIT(luvitThread,timer)
luvitThread.start(entry,package.path,name,c.func,...)
return c
end
print("Integrated Luvit!")
multi.print("Integrated Luvit!")
multi.integration={} -- for module creators
multi.integration.GLOBAL=GLOBAL
multi.integration.THREAD=THREAD

View File

@ -1,7 +1,7 @@
--[[
MIT License
Copyright (c) 2018 Ryan Ward
Copyright (c) 2019 Ryan Ward
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
@ -23,7 +23,7 @@ SOFTWARE.
]]
local multi = require("multi")
local net = require("net")
require("bin")
local bin = require("bin")
bin.setBitsInterface(infinabits) -- the bits interface does not work so well, another bug to fix
-- Commands that the master and node will respect, max of 256 commands
@ -42,6 +42,7 @@ local CMD_CONSOLE = 0x0B
local char = string.char
local byte = string.byte
-- Process to hold all of the networkManager's muilt objects
-- Helper for piecing commands
local function pieceCommand(cmd,...)
@ -142,17 +143,20 @@ function multi:nodeManager(port)
server.OnDataRecieved(function(server,data,cid,ip,port)
local cmd = data:sub(1,1)
if cmd == "R" then
multi:newTLoop(function(loop)
if server.timeouts[cid]==true then
server.OnNodeRemoved:Fire(server.nodes[cid])
server.nodes[cid] = nil
server.timeouts[cid] = nil
loop:Destroy()
return
multi:newThread("Node Client Manager",function(loop)
while true do
if server.timeouts[cid]==true then
server.OnNodeRemoved:Fire(server.nodes[cid])
server.nodes[cid] = nil
server.timeouts[cid] = nil
thread.kill()
else
server.timeouts[cid] = true
server:send(cid,"ping")
end
thread.sleep(1)
end
server.timeouts[cid] = true
server:send(cid,"ping")
end,1)
end)
server.nodes[cid]=data:sub(2,-1)
server.OnNodeAdded:Fire(server.nodes[cid])
elseif cmd == "G" then
@ -172,6 +176,7 @@ function multi:nodeManager(port)
end
-- The main driving force of the network manager: Nodes
function multi:newNode(settings)
multi:enableLoadDetection()
settings = settings or {}
-- Here we have to use the net library to broadcast our node across the network
math.randomseed(os.time())
@ -189,21 +194,21 @@ function multi:newNode(settings)
node.hasFuncs = {}
node.OnError = multi:newConnection()
node.OnError(function(node,err,master)
print("ERROR",err,node.name)
multi.print("ERROR",err,node.name)
local temp = bin.new()
temp:addBlock(#node.name,2)
temp:addBlock(node.name)
temp:addBlock(#err,2)
temp:addBlock(err)
for i,v in pairs(node.connections) do
print(i)
multi.print(i)
v[1]:send(v[2],char(CMD_ERROR)..temp.data,v[3])
end
end)
if settings.managerDetails then
local c = net:newTCPClient(settings.managerDetails[1],settings.managerDetails[2])
if not c then
print("Cannot connect to the node manager! Ensuring broadcast is enabled!") settings.noBroadCast = false
multi.print("Cannot connect to the node manager! Ensuring broadcast is enabled!") settings.noBroadCast = false
else
c.OnDataRecieved(function(self,data)
if data == "ping" then
@ -215,7 +220,7 @@ function multi:newNode(settings)
end
if not settings.preload then
if node.functions:getSize()~=0 then
print("We have function(s) to preload!")
multi.print("We have function(s) to preload!")
local len = node.functions:getBlock("n",1)
local name,func
while len do
@ -265,14 +270,14 @@ function multi:newNode(settings)
node.queue:push(resolveData(dat))
elseif cmd == CMD_REG then
if not settings.allowRemoteRegistering then
print(ip..": has attempted to register a function when it is currently not allowed!")
multi.print(ip..": has attempted to register a function when it is currently not allowed!")
return
end
local temp = bin.new(dat)
local len = temp:getBlock("n",1)
local name = temp:getBlock("s",len)
if node.hasFuncs[name] then
print("Function already preloaded onto the node!")
multi.print("Function already preloaded onto the node!")
return
end
len = temp:getBlock("n",2)
@ -283,7 +288,7 @@ function multi:newNode(settings)
local temp = bin.new(dat)
local len = temp:getBlock("n",1)
local name = temp:getBlock("s",len)
len = temp:getBlock("n",2)
len = temp:getBlock("n",4)
local args = temp:getBlock("s",len)
_G[name](unpack(resolveData(args)))
elseif cmd == CMD_TASK then
@ -299,13 +304,13 @@ function multi:newNode(settings)
node.OnError:Fire(node,err,server)
end
elseif cmd == CMD_INITNODE then
print("Connected with another node!")
multi.print("Connected with another node!")
node.connections[dat]={server,ip,port}
multi.OnGUpdate(function(k,v)
server:send(ip,table.concat{char(CMD_GLOBAL),k,"|",v},port)
end)-- set this up
elseif cmd == CMD_INITMASTER then
print("Connected to the master!",dat)
multi.print("Connected to the master!",dat)
node.connections[dat]={server,ip,port}
multi.OnGUpdate(function(k,v)
server:send(ip,table.concat{char(CMD_GLOBAL),k,"|",v},port)
@ -352,7 +357,7 @@ function multi:newMaster(settings) -- You will be able to have more than one mas
if settings.managerDetails then
local client = net:newTCPClient(settings.managerDetails[1],settings.managerDetails[2])
if not client then
print("Cannot connect to the node manager! Ensuring broadcast listening is enabled!") settings.noBroadCast = false
multi.print("Warning: Cannot connect to the node manager! Ensuring broadcast listening is enabled!") settings.noBroadCast = false
else
client.OnDataRecieved(function(client,data)
local cmd = data:sub(1,1)
@ -402,7 +407,7 @@ function multi:newMaster(settings) -- You will be able to have more than one mas
temp:addBlock(CMD_CALL,1)
temp:addBlock(#name,1)
temp:addBlock(name,#name)
temp:addBlock(#args,2)
temp:addBlock(#args,4)
temp:addBlock(args,#args)
master:sendTo(node,temp.data)
end
@ -436,12 +441,12 @@ function multi:newMaster(settings) -- You will be able to have more than one mas
name = self:getRandomNode()
end
if name==nil then
multi:newTLoop(function(loop)
if name~=nil then
self:sendTo(name,char(CMD_TASK)..len..aData..len2..fData)
loop:Desrtoy()
end
end,.1)
multi:newEvent(function() return name~=nil end):OnEvent(function(evnt)
self:sendTo(name,char(CMD_TASK)..len..aData..len2..fData)
evnt:Destroy()
end):SetName("DelayedSendTask"):SetName("DelayedSendTask"):SetTime(8):OnTimedOut(function(self)
self:Destroy()
end)
else
self:sendTo(name,char(CMD_TASK)..len..aData..len2..fData)
end
@ -455,12 +460,12 @@ function multi:newMaster(settings) -- You will be able to have more than one mas
name = "NODE_"..name
end
if self.connections[name]==nil then
multi:newTLoop(function(loop)
if self.connections[name]~=nil then
self.connections[name]:send(data)
loop:Destroy()
end
end,.1)
multi:newEvent(function() return self.connections[name]~=nil end):OnEvent(function(evnt)
self.connections[name]:send(data)
evnt:Destroy()
end):SetName("DelayedSendTask"):SetTime(8):OnTimedOut(function(self)
self:Destroy()
end)
else
self.connections[name]:send(data)
end
@ -495,16 +500,19 @@ function multi:newMaster(settings) -- You will be able to have more than one mas
client.OnClientReady(function()
client:send(char(CMD_INITMASTER)..master.name) -- Tell the node that you are a master trying to connect
if not settings.managerDetails then
multi:newTLoop(function(loop)
if master.timeouts[name]==true then
master.timeouts[name] = nil
master.connections[name] = nil
loop:Destroy()
return
multi:newThread("Node Data Link Controller",function(loop)
while true do
if master.timeouts[name]==true then
master.timeouts[name] = nil
master.connections[name] = nil
thread.kill()
else
master.timeouts[name] = true
master:sendTo(name,char(CMD_PING))
end
thread.sleep(1)
end
master.timeouts[name] = true
master:sendTo(name,char(CMD_PING))
end,1)
end)
end
client.name = name
client.OnDataRecieved(function(client,data)
@ -542,7 +550,7 @@ function multi:newMaster(settings) -- You will be able to have more than one mas
return master
end
-- The init function that gets returned
print("Integrated Network Parallelism")
multi.print("Integrated Network Parallelism")
return {init = function()
return GLOBAL
end}

View File

@ -1,7 +1,7 @@
--[[
MIT License
Copyright (c) 2018 Ryan Ward
Copyright (c) 2019 Ryan Ward
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
@ -112,30 +112,37 @@ function multi:newSystemThreadedQueue(name) -- in love2d this will spawn a chann
end
return c
end
function multi:newSystemThreadedConnection(name,protect)
local c={}
c.name = name or error("You must provide a name for the connection object!")
c.protect = protect or false
c.idle = nil
local sThread=multi.integration.THREAD
local GLOBAL=multi.integration.GLOBAL
c.name = name or error("You must supply a name for this object!")
c.protect = protect or false
c.count = 0
multi:newSystemThreadedQueue(name.."THREADED_CALLFIRE"):init()
local qsm = multi:newSystemThreadedQueue(name.."THREADED_CALLSYNCM"):init()
local qs = multi:newSystemThreadedQueue(name.."THREADED_CALLSYNC"):init()
local connSync = multi:newSystemThreadedQueue(c.name.."_CONN_SYNC")
local connFire = multi:newSystemThreadedQueue(c.name.."_CONN_FIRE")
function c:init()
local multi = require("multi")
if multi:getPlatform()=="love2d" then
if love then -- lets make sure we don't reference up-values if using love2d
GLOBAL=_G.GLOBAL
sThread=_G.sThread
end
local conns = 0
local qF = sThread.waitFor(self.name.."THREADED_CALLFIRE"):init()
local qSM = sThread.waitFor(self.name.."THREADED_CALLSYNCM"):init()
local qS = sThread.waitFor(self.name.."THREADED_CALLSYNC"):init()
qSM:push("OK")
local conn = {}
conn.obj = multi:newConnection(self.protect)
setmetatable(conn,{__call=function(self,...) return self:connect(...) end})
conn.obj = multi:newConnection()
setmetatable(conn,{
__call=function(self,...)
return self:connect(...)
end
})
local ID = sThread.getID()
local sync = sThread.waitFor(self.name.."_CONN_SYNC"):init()
local fire = sThread.waitFor(self.name.."_CONN_FIRE"):init()
local connections = {}
if not multi.isMainThread then
connections = {0}
end
sync:push{"INIT",ID} -- Register this as an active connection!
function conn:connect(func)
return self.obj(func)
end
@ -146,54 +153,98 @@ function multi:newSystemThreadedConnection(name,protect)
self.obj:Remove()
end
function conn:Fire(...)
local args = {multi.randomString(8),...}
for i = 1, conns do
qF:push(args)
for i = 1,#connections do
fire:push{connections[i],ID,{...}}
end
end
local lastID = ""
local lastCount = 0
multi:newThread("syncer",function()
while true do
thread.skip(1)
local fire = qF:peek()
local count = qS:peek()
if fire and fire[1]~=lastID then
lastID = fire[1]
qF:pop()
table.remove(fire,1)
conn.obj:Fire(unpack(fire))
end
if count and count[1]~=lastCount then
conns = count[2]
lastCount = count[1]
qs:pop()
function conn:FireTo(to,...)
local good = false
for i = 1,#connections do
if connections[i]==to then
good = true
break
end
end
end)
if not good then return multi.print("NonExisting Connection!") end
fire:push{to,ID,{...}}
end
-- FIRE {TO,FROM,{ARGS}}
local data
local clock = os.clock
conn.OnConnectionAdded = multi:newConnection()
multi:newLoop(function()
data = fire:peek()
if type(data)=="table" and data[1]==ID then
if data[2]==ID and conn.IgnoreSelf then
fire:pop()
return
end
fire:pop()
conn.obj:Fire(unpack(data[3]))
end
data = sync:peek()
if data~=nil and data[1]=="SYNCA" and data[2]==ID then
sync:pop()
multi.nextStep(function()
conn.OnConnectionAdded:Fire(data[3])
end)
table.insert(connections,data[3])
end
if type(data)=="table" and data[1]=="SYNCR" and data[2]==ID then
sync:pop()
for i=1,#connections do
if connections[i] == data[3] then
table.remove(connections,i)
end
end
end
end):setName("STConn.syncer")
return conn
end
multi:newThread("connSync",function()
local cleanUp = {}
multi.OnSystemThreadDied(function(ThreadID)
for i=1,#syncs do
connSync:push{"SYNCR",syncs[i],ThreadID}
end
cleanUp[ThreadID] = true
end)
multi:newThread(c.name.." Connection-Handler",function()
local data
local clock = os.clock
local syncs = {}
while true do
thread.skip(1)
local syncIN = qsm:pop()
if syncIN then
if syncIN=="OK" then
c.count = c.count + 1
else
c.count = c.count - 1
if not c.idle then
thread.sleep(.5)
else
if clock() - c.idle >= 15 then
c.idle = nil
end
local rand = math.random(1,1000000)
for i = 1, c.count do
qs:push({rand,c.count})
thread.skip()
end
data = connSync:peek()
if data~= nil and data[1]=="INIT" then
connSync:pop()
c.idle = clock()
table.insert(syncs,data[2])
for i=1,#syncs do
connSync:push{"SYNCA",syncs[i],data[2]}
end
end
data = connFire:peek()
if data~=nil and cleanUp[data[1]] then
local meh = data[1]
connFire:pop() -- lets remove dead thread stuff
multi:newAlarm(15):OnRing(function(a)
cleanUp[meh] = nil
end)
end
end
end)
GLOBAL[name]=c
GLOBAL[c.name]=c
return c
end
function multi:systemThreadedBenchmark(n)
function multi:SystemThreadedBenchmark(n)
n=n or 1
local cores=multi.integration.THREAD.getCores()
local queue=multi:newSystemThreadedQueue("THREAD_BENCH_QUEUE"):init()
@ -211,6 +262,7 @@ function multi:systemThreadedBenchmark(n)
multi:benchMark(n):OnBench(function(self,count)
queue:push(count)
sThread.kill()
error("Thread was killed!")
end)
multi:mainloop()
end,n)
@ -240,6 +292,7 @@ function multi:newSystemThreadedConsole(name)
local sThread=multi.integration.THREAD
local GLOBAL=multi.integration.GLOBAL
function c:init()
_G.__Needs_Multi = true
local multi = require("multi")
if multi:getPlatform()=="love2d" then
GLOBAL=_G.GLOBAL
@ -247,10 +300,10 @@ function multi:newSystemThreadedConsole(name)
end
local cc={}
if multi.isMainThread then
if GLOBAL["__SYSTEM_CONSLOE__"] then
cc.stream = sThread.waitFor("__SYSTEM_CONSLOE__"):init()
if GLOBAL["__SYSTEM_CONSOLE__"] then
cc.stream = sThread.waitFor("__SYSTEM_CONSOLE__"):init()
else
cc.stream = multi:newSystemThreadedQueue("__SYSTEM_CONSLOE__"):init()
cc.stream = multi:newSystemThreadedQueue("__SYSTEM_CONSOLE__"):init()
multi:newLoop(function()
local data = cc.stream:pop()
if data then
@ -261,10 +314,10 @@ function multi:newSystemThreadedConsole(name)
print(unpack(data))
end
end
end)
end):setName("ST.consoleSyncer")
end
else
cc.stream = sThread.waitFor("__SYSTEM_CONSLOE__"):init()
cc.stream = sThread.waitFor("__SYSTEM_CONSOLE__"):init()
end
function cc:write(msg)
self.stream:push({"w",tostring(msg)})
@ -281,12 +334,14 @@ function multi:newSystemThreadedConsole(name)
GLOBAL[c.name]=c
return c
end
-- NEEDS WORK
function multi:newSystemThreadedTable(name)
local c={}
c.name=name -- set the name this is important for identifying what is what
local sThread=multi.integration.THREAD
local GLOBAL=multi.integration.GLOBAL
function c:init() -- create an init function so we can mimic on both love2d and lanes
_G.__Needs_Multi = true
local multi = require("multi")
if multi:getPlatform()=="love2d" then
GLOBAL=_G.GLOBAL
@ -324,14 +379,16 @@ function multi:newSystemThreadedTable(name)
return c
end
local jobqueuecount = 0
local jqueues = {}
function multi:newSystemThreadedJobQueue(a,b)
jobqueuecount=jobqueuecount+1
local GLOBAL=multi.integration.GLOBAL
local sThread=multi.integration.THREAD
local c = {}
c.numberofcores = 4
c.idle = nil
c.name = "SYSTEM_THREADED_JOBQUEUE_"..jobqueuecount
-- This is done to keep backwards compatability for older code
-- This is done to keep backwards compatibility for older code
if type(a)=="string" and not(b) then
c.name = a
elseif type(a)=="number" and not (b) then
@ -343,6 +400,10 @@ function multi:newSystemThreadedJobQueue(a,b)
c.name = b
c.numberofcores = a
end
if jqueues[c.name] then
error("A job queue by the name: "..c.name.." already exists!")
end
jqueues[c.name] = true
c.isReady = false
c.jobnum=1
c.OnJobCompleted = multi:newConnection()
@ -359,6 +420,7 @@ function multi:newSystemThreadedJobQueue(a,b)
end
c.tempQueue = {}
function c:pushJob(name,...)
c.idle = os.clock()
if not self.isReady then
table.insert(c.tempQueue,{self.jobnum,name,...})
self.jobnum=self.jobnum+1
@ -370,8 +432,9 @@ function multi:newSystemThreadedJobQueue(a,b)
end
end
function c:doToAll(func)
local r = multi.randomString(12)
for i = 1, self.numberofcores do
queueDA:push{multi.randomString(12),func}
queueDA:push{r,func}
end
end
for i=1,c.numberofcores do
@ -425,12 +488,9 @@ function multi:newSystemThreadedJobQueue(a,b)
end
end
end)
multi:newThread("Idler",function()
while true do
if os.clock()-lastjob>1 then
sThread.sleep(.1)
end
thread.sleep(.001)
multi:newLoop(function()
if os.clock()-lastjob>1 then
sThread.sleep(.1)
end
end)
setmetatable(_G,{
@ -443,11 +503,11 @@ function multi:newSystemThreadedJobQueue(a,b)
end
end,c.name)
end
multi:newThread("counter",function()
print("thread started")
local clock = os.clock
multi:newThread("JQ-"..c.name.." Manager",function()
local _count = 0
while _count<c.numberofcores do
thread.skip(1)
thread.skip()
if queueCC:pop() then
_count = _count + 1
end
@ -460,9 +520,17 @@ function multi:newSystemThreadedJobQueue(a,b)
c.OnReady:Fire(c)
local dat
while true do
thread.skip(1)
if not c.idle then
thread.sleep(.5)
else
if clock() - c.idle >= 15 then
c.idle = nil
end
thread.skip()
end
dat = queueJD:pop()
if dat then
c.idle = clock()
c.OnJobCompleted:Fire(unpack(dat))
end
end

View File

@ -16,7 +16,6 @@ dependencies = {
"lua >= 5.1",
"bin",
"lanes",
"lua-net"
}
build = {
type = "builtin",

View File

@ -0,0 +1,31 @@
package = "multi"
version = "13.0-0"
source = {
url = "git://github.com/rayaman/multi.git",
tag = "v13.0.0",
}
description = {
summary = "Lua Multi tasking library",
detailed = [[
This library contains many methods for multi tasking. From simple side by side code using multi-objs, to using coroutine based Threads and System threads(When you have lua lanes installed or are using love2d)
]],
homepage = "https://github.com/rayaman/multi",
license = "MIT"
}
dependencies = {
"lua >= 5.1",
"bin",
"lanes",
}
build = {
type = "builtin",
modules = {
["multi"] = "multi/init.lua",
["multi.compat.love2d"] = "multi/compat/love2d.lua",
["multi.integration.lanesManager"] = "multi/integration/lanesManager.lua",
["multi.integration.loveManager"] = "multi/integration/loveManager.lua",
["multi.integration.luvitManager"] = "multi/integration/luvitManager.lua",
["multi.integration.networkManager"] = "multi/integration/networkManager.lua",
["multi.integration.shared"] = "multi/integration/shared.lua"
}
}

12
sample-nodeManager.lua Normal file
View File

@ -0,0 +1,12 @@
package.path="?/init.lua;?.lua;"..package.path
multi = require("multi")
local GLOBAL, THREAD = require("multi.integration.lanesManager").init()
nGLOBAL = require("multi.integration.networkManager").init()
multi:nodeManager(12345) -- Host a node manager on port: 12345
print("Node Manager Running...")
settings = {
priority = 0, -- 1 or 2
protect = false,
}
multi:mainloop(settings)
-- That's all you need to run the node manager, everything else is done automatically

View File

@ -1,36 +1,38 @@
package.path="?/init.lua;?.lua;"..package.path
multi = require("multi")
local GLOBAL, THREAD = require("multi.integration.lanesManager").init()
conn = multi:newSystemThreadedConnection("test"):init()
multi:newSystemThread("Work",function()
local multi = require("multi")
conn = THREAD.waitFor("test"):init()
conn(function(...)
print(...)
end)
multi:newTLoop(function()
conn:Fire("meh2")
end,1)
multi:mainloop()
end)
multi.OnError(function(a,b,c)
print(c)
end)
multi:newTLoop(function()
conn:Fire("meh")
end,1)
conn(function(...)
print(">",...)
end)
--~ jq = multi:newSystemThreadedJobQueue()
--~ jq:registerJob("test",function(a)
--~ return "Hello",a
--~ end)
--~ jq.OnJobCompleted(function(ID,...)
--~ print(ID,...)
--~ end)
--~ for i=1,16 do
--~ jq:pushJob("test",5)
local GLOBAL,THREAD = require("multi.integration.lanesManager").init()
nGLOBAL = require("multi.integration.networkManager").init()
--~ local a
--~ local clock = os.clock
--~ function sleep(n) -- seconds
--~ local t0 = clock()
--~ while clock() - t0 <= n do end
--~ end
multi:mainloop()
--~ master = multi:newMaster{
--~ name = "Main", -- the name of the master
--~ noBroadCast = true, -- if using the node manager, set this to true to avoid double connections
--~ managerDetails = {"localhost",12345}, -- the details to connect to the node manager (ip,port)
--~ }
--~ master.OnError(function(name,err)
--~ print(name.." has encountered an error: "..err)
--~ end)
--~ local connlist = {}
--~ multi:newThread("NodeUpdater",function()
--~ while true do
--~ thread.sleep(1)
--~ for i=1,#connlist do
--~ master:execute("TASK_MAN",connlist[i], multi:getTasksDetails())
--~ end
--~ end
--~ end)
--~ master.OnNodeConnected(function(name)
--~ print("Connected to the node")
--~ table.insert(connlist,name)
--~ end)
--~ multi.OnError(function(...)
--~ print(...)
--~ end)
multi:mainloop{
protect = false,
print = true
}