Changelog

Table of contents

Update 14.1.0 - A whole new world of possibilities
Update 14.0.0 - Consistency, Additions and Stability
Update 13.1.0 - Bug fixes and features added
Update 13.0.0 - Added some documentation and some new features too. Check it out!
Update 12.2.2 - Time for some more bug fixes!
Update 12.2.1 - Time for some bug fixes!
Update 12.2.0 - The chains of binding
Update 12.1.0 - Threads just can't hold on anymore
Update: 12.0.0 - Big update (Lots of additions some changes)
Update: 1.11.1 - Small Clarification on Love
Update: 1.11.0
Update: 1.10.0
Update: 1.9.2
Update: 1.9.1 - Threads can now argue
Update: 1.9.0
Update: 1.8.7
Update: 1.8.6
Update: 1.8.5
Update: 1.8.4
Update: 1.8.3 - Mainloop receives some needed overhauling
Update: 1.8.2
Update: 1.8.1
Update: 1.7.6
Update: 1.7.5
Update: 1.7.4
Update: 1.7.3
Update: 1.7.2
Update: 1.7.1 - Bug Fixes Only
Update: 1.7.0 - Threading the systems
Update: 1.6.0
Update: 1.5.0
Update: 1.4.1 (4/10/2017) - First Public release of the library
Update: 1.4.0 (3/20/2017)
Update: 1.3.0 (1/29/2017)
Update: 1.2.0 (12/31/2016)
Update: 1.1.0
Update: 1.0.0
Update: 0.6.3
Update: 0.6.2
Update: 0.6.1-6
Update: 0.5.1-6
Update: 0.4.1
Update: 0.3.0 - The update that started it all
Update: EventManager 2.0.0
Update: EventManager 1.2.0
Update: EventManager 1.1.0
Update: EventManager 1.0.0 - Error checking
Version: EventManager 0.0.1 - In The Beginning things were very different

Update 14.1.0 - A whole new world of possibilities

Full Update Showcase

Something I plan on doing each version going forward

package.path = "?/init.lua;?.lua;" .. package.path
multi, thread = require("multi"):init()
local GLOBAL, THREAD = require("multi.integration.lanesManager"):init()

t = THREAD:newFunction(function(...)
    print("This is a system threaded function!", ...)
    THREAD.sleep(1) -- This is handled within a system thread!
    -- Note: this creates a system thread that runs then ends.
    return "We done!"
end)

print(t("hehe", 1, 2, 3, true).connect(function(...)
    print("connected:", ...)
end)) -- The same features that work with thread:newFunction() are here as well

multi.OnLoad(function()
    print("Code Loaded!") -- Connect to the load event
end)

t = os.clock()
co = 0

multi.OnExit(function(n)
    print("Code Exited: " .. os.clock() - t .. " Count: " .. co) -- Let's print when things have ended
end)

test = thread:newFunction(function()
    thread.sleep(1) -- Internally this throws a yield call which tells the scheduler to sleep this thread for 1 second!
    return 1, math.random(2, 100)
end)

multi:newThread(function()
    while true do
        thread.skip() -- Even though a few metamethods are "yielding", this shows things still happening and counting.
        -- It connects to the Code Exited event later on.
        co = co + 1
    end
end)

-- We can get around yielding across metamethods by using a threaded function.
-- For example:
example = {}
setmetatable(example, {
    __newindex = function(t, k, v) -- Using a threaded function inside of a normal function
        print("Inside metamethod", t, k, v)
        local a, b = test().wait() -- This function holds the code and "yields"; see the comment inside the test function!
        -- We should see a 1 second delay since the function sleeps for a second, then returns.
        print("We did it!", a, b)
        rawset(t, k, v)
        -- This means that by using a threaded function we can get around yielding across metamethods.
        -- This is useful if you aren't using LuaJIT, or if you are using Lua in an environment that is on version 5.1.
        -- There is a gotcha however: if using code that was meant to work with another coroutine-based scheduler, this may not work.
    end,
    __index = thread:newFunction(function(t, k, v) -- Using a threaded function as the metamethod
        -- This works by returning a table with a __call metamethod.
        -- Will this work? Will Lua detect this as a function or a table?
        thread.sleep(1)
        return "You got a string"
    end, true) -- Tell the code to force a wait and to identify as a function. We need to do this for metamethods.
    -- If we don't pass true, this is a table with a __call metamethod.
})

example["test"] = "We set a variable!"
print(example["test"])
print(example.hi)

-- When not in a threaded environment at root level we need to tell the code that we are waiting!
-- Alternatively, after the function argument we can pass true to force a wait.
c, d = test().wait()
print(c, d)

a, b = 6, 7
multi:newThread(function()
    -- a,b = test().wait() -- Will modify the global environment.
    -- When wait is used, the special metamethod routine is not triggered and variables are set as normal.
    a, b = test() -- Will modify the thread-local environment.
    -- The threaded function test triggers a special routine within the metamethod
    -- that alters the thread's environment instead of the global environment.
    print("Waited:", a, b) -- This returns instantly even though the function isn't done!
    test().connect(function(a, b)
        print("Connected:", a, b)
        os.exit()
    end) -- This waits for the returns since we are demanding them
end)

multi:mainloop()

Changed:

Added:

package.path = "?/init.lua;?.lua;" .. package.path
multi, thread = require("multi"):init()
multi.OnExit(function(n)
    print("Code Exited")
end)
sdf() -- Non-existent function being called to trigger an error
package.path = "?/init.lua;?.lua;" .. package.path
multi, thread = require("multi"):init()
multi.OnExit(function(n)
    print("Code Exited")
end)
-- The code finishing also triggers this event
package.path = "?/init.lua;?.lua;" .. package.path
multi, thread = require("multi"):init()
a, b = 6, 7
multi:newThread(function()
    function test() -- Auto converts your function into a threaded function
        thread.sleep(1)
        return 1, 2
    end
    -- a,b = test().wait() -- Will modify the global environment.
    -- When wait is used, the special metamethod routine is not triggered and variables are set as normal.
    a, b = test() -- Will modify the thread-local environment.
    -- The threaded function test triggers a special routine within the metamethod
    -- that alters the thread's environment instead of the global environment.
    print("Waited:", a, b) -- This returns instantly even though the function isn't done!
    test().connect(function(a, b)
        print("Connected:", a, b)
        os.exit()
    end) -- This waits for the returns since we are demanding them
end)
multi:newAlarm(2):OnRing(function()
    print(a, b)
end)
multi:mainloop()

Usage

local multi, thread = require("multi"):init()
multi:scheduleJob({min = 15, hour = 14}, function()
    -- This function will be called once every day at 2:15 PM.
    -- Using a combination of the values above you are able to schedule a time.
end)
multi:mainloop()

Removed:

Fixed:

Update 14.0.0 - Consistency, Additions and Stability


Added:

package.path = "./?/init.lua;" .. package.path
local multi, thread = require("multi"):init()
conn = multi:newConnection()
multi:newThread(function()
    thread.hold(conn)
    print("Connection Fired!!!")
end)
multi:newAlarm(3):OnRing(function()
    conn:Fire()
end)

thread newFunction

func = thread:newFunction(function(...)
    print("Function running...")
    thread.sleep(1)
    return {1, 2, 3}, "done"
end)
multi:newThread("Test", function()
    func().connect(function(...)
        print(...)
    end)
end)

----OUTPUT----
> Function running...
> table: 0x008cf340    done    nil    nil    nil    nil    nil

thread newFunction using auto convert

package.path = "./?/init.lua;" .. package.path
multi, thread = require("multi").init()
a = 5
multi:newThread("Test", function()
    function hmm() -- Auto converted into a threaded function
        return "Hello!", 2
    end
    print(a)
    a = 10
    print(hmm().wait())
end)
multi:newAlarm(3):OnRing(function()
    print(a)
end)
print(hmm)
multi:mainloop()

-----OUTPUT-----
> nil
> 5
> Hello!    2    nil    nil    nil    nil    nil
> 10

Note on the nils: the way I manage function returns is by allocating them to predefined locals. Because I pass these values regardless, they technically get passed even when they are nil. This choice was made to avoid creating tables to capture arguments and then using unpack to pass them on when processing is done.

Fixed:

Removed:

Changed:

package.path = "./?/init.lua;" .. package.path
local multi, thread = require("multi").init()
test = multi:newConnection()
test(function(a)
    print("test 1", a.Temp)
    a.Temp = "No!"
end, function(a)
    print("test 2", a.Temp)
    a.Temp = "Maybe!"
end, function(a)
    print("test 3", a.Temp)
end)
test:Fire({Temp = "Yes!"})
local multi, thread = require("multi").init() -- The require multi function still returns the multi object like before

Note: Using init allows you to get access to the thread handle. This was done because thread was modifying the global space as well as multi, and I wanted to stop modifying the global space. Internally most of your code can stay the same; you only need to change how the library is required. I do still toy a bit with the global space, but I use a name that is invalid as a variable name: $multi. It is used internally to keep some records and maintain a clean space.

Also, when using integrations things now look more consistent.

local multi, thread = require("multi").init()
local GLOBAL, THREAD = require("multi.integration.lanesManager").init() -- or whichever manager you are using
local nGLOBAL, nTHREAD = require("multi.integration.networkManager").init()

Note: You can mix and match integrations. You can create system threads within network threads, and you can also create coroutine-based threads within both network and system threads. This gives you quite a bit of flexibility to create something awesome.

Going forward:

Update 13.1.0 - Bug fixes and features added


Added:

Fixed:

Changed:

Tasks Details Table format

{
    ["Tasks"] = {
        {
            ["TID"] = 0,
            ["Type"] = scheduler,
            ["Name"] = multi.thread,
            ["Priority"] = Core,
            ["Uptime"] = 6.752,
            ["Link"] = tableRef
        },
        ...
    },
    ["Systemthreads"] = {
        {
            ["Uptime"] = 6.752,
            ["Link"] = tableRef,
            ["Name"] = threadname,
            ["ThreadID"] = 0
        },
        ...
    },
    ["Threads"] = {
        {
            ["Uptime"] = 6.752,
            ["Link"] = tableRef,
            ["Name"] = threadname,
            ["ThreadID"] = 0
        },
        ...
    },
    ["ProcessName"] = multi.root,
    ["CyclesPerSecondPerTask"] = 3560300,
    ["MemoryUsage"] = 1846, -- in KB, returned as a number
    ["ThreadCount"] = 1,
    ["SystemLoad"] = 0, -- as a %; 100 is max, 0 is min
    ["PriorityScheme"] = Round-Robin,
    ["SystemThreadCount"] = 1
}

Update 13.0.0 - Added some documentation and some new features too. Check it out!


Quick note on the 13.0.0 update: This update I went all in on finding bugs and improving performance within the library. I added some new features, and the new task manager, which I used as a way to debug the library, was a great help; so much so that it is now a permanent feature. It's been about half a year since my last update, but so much work needed to be done. I hope you can find a use for my library in your code. I am extremely proud of my work; over 7 years of development I learned so much about Lua and programming through the creation of this library. It was fun, but there will always be more to add and bugs crawling their way in. I can't wait to see where this library goes in the future!

Fixed:

Changed:

Connection Example:

loop = multi:newTLoop(function(self)
    self:OnLoops() -- New way to fire a connection! Only works when used on a multi object,
    -- bin objects, or any object that contains a Type variable
end, 1)
loop.OnLoops = multi:newConnection()
loop.OnLoops(function()
    print("Looping")
end)
multi:mainloop()

Function Example:

func = multi:newFunction(function(self, a, b)
    self:Pause()
    return 1, 2, 3
end)
print(func()) -- returns: 1, 2, 3
print(func()) -- nil, true

Removed:

These didn't have much use in their previous form, but with the addition of hyper-threaded processes, the goals that these objects aimed to solve are now achievable using a process.

Fixed:

Added:

package.path = "?/init.lua;?.lua;" .. package.path
local multi = require("multi")
conn = multi:newConnector()
conn.OnTest = multi:newConnection()
conn.OnTest(function()
    print("Yes!")
end)
test = multi:newHyperThreadedProcess("test")
test:newTLoop(function()
    print("HI!")
    conn:OnTest()
end, 1)
test:newLoop(function()
    print("HEY!")
    thread.sleep(.5)
end)
multi:newAlarm(3):OnRing(function()
    test:Sleep(10)
end)
test:Start()
multi:mainloop()

Table format for getTasksDetails(STRING format)

{
    {TID = 1, Type = "", Priority = "", Uptime = 0},
    {TID = 2, Type = "", Priority = "", Uptime = 0},
    ...
    {TID = n, Type = "", Priority = "", Uptime = 0},
    ThreadCount = 0,
    threads = {
        [Thread_Name] = {
            Uptime = 0
        }
    }
}

Note: After adding the getTasksDetails() function I noticed many areas where threads and tasks were not being cleaned up, and fixed the leaks. I also found that a lot of tasks were starting by default and made them enabled only on demand. If you compare the benchmark from this version to the last version, you'll notice a significant increase in performance.

Going forward:

Update 12.2.2 - Time for some more bug fixes!


Fixed:

Update 12.2.1 - Time for some bug fixes!


Fixed: SystemThreadedJobQueues

Fixed: SystemThreadedConnection

Removed: multi:newQueuer

Going forward:

Example

package.path = "?/init.lua;?.lua;" .. package.path
multi = require("multi")
GLOBAL, THREAD = require("multi.integration.lanesManager").init()
jq = multi:newSystemThreadedJobQueue()
jq:registerJob("test", function(a)
    return "Hello", a
end)
jq.OnJobCompleted(function(ID, ...)
    print(ID, ...)
end)
for i = 1, 16 do
    jq:pushJob("test", 5)
end
multi:mainloop()

Update 12.2.0 - The chains of binding

Added:

-- All methods that did not return anything before now return the object itself,
-- allowing chaining. Most if not all mutators returned nil, so chaining can now
-- be done. I will eventually write up full documentation of everything, which
-- will show this.
multi = require("multi")
multi:newStep(1, 100):OnStep(function(self, i)
    print("Index: " .. i)
end):OnEnd(function(self)
    print("Step is done!")
end)
multi:mainloop{
    priority = 3
}

Priority 3 works a bit differently than the other two.

P1 follows a formula that resembles ~n = I * PRank, where n is the number of steps given to an object with rank PRank and I is the idle time; see the chart below. The aim of this priority scheme was to make core objects run fastest while letting idle processes get decent time as well.

C: 3322269  ~I*7
H: 2847660  ~I*6
A: 2373050  ~I*5
N: 1898440  ~I*4
B: 1423830  ~I*3
L:  949220  ~I*2
I:  474610  ~I

~n = I * PRank
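As a quick sanity check on the chart above, the P1 allocation can be sketched in plain Lua. The rank numbers and the idle value are read off the chart; the single-letter level names are the chart's own shorthand:

```lua
-- Sketch of the P1 scheme: n ~ I * PRank, where I is the idle allocation.
local I = 474610 -- idle steps per second, from the chart above
local ranks = {C = 7, H = 6, A = 5, N = 4, B = 3, L = 2, I = 1}
for level, PRank in pairs(ranks) do
    -- C works out to 3322270, matching the measured ~3322269 above
    print(level, I * PRank)
end
```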

P2 follows a formula that resembles ~n = n*4, where n is the idle time; see the chart below. The goal of this one was to give core processes more time while keeping idle processes low.

C: 6700821
H: 1675205
A:  418801
N:  104700
B:   26175
L:    6543
I:    1635

~n = n * 4

P3 ignores using a basic function and instead bases its processing time on the amount of CPU time available. If CPU time is low and a process is set at a lower priority, its time will be reduced. There is no formula; at idle, almost all processes work at the same speed!

C: 2120906
H: 2120906
A: 2120906
N: 2120906
B: 2120906
L: 2120906
I: 2120506

Auto priority works by determining what should be set high or low. Because Lua does not offer more precision than milliseconds, I was unable to build a detailed manager that can set things to high, above normal, normal, etc. This has either high or low. If a process takes longer than .001 milliseconds, it will be set to low priority. You can change this with the setting auto_lowest = multi.Priority_[PLevel]; the default is low, not idle, since idle tends to get about 1 process step each second, though you can change it to idle using that setting.
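For instance, lowering the floor that auto priority can demote to might look like this. This is a sketch: I am assuming the setting is passed through the usual mainloop settings table, and that multi.Priority_Idle names the idle level as the [PLevel] pattern above suggests:

```lua
multi = require("multi")
multi:mainloop{
    -- Hypothetical placement: let auto priority demote slow tasks all the
    -- way down to idle instead of the default floor of low.
    auto_lowest = multi.Priority_Idle,
}
```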

Improved:

I usually give an example of the changes made, but this time I have an explanation for multi.nextStep(). It's not an entirely new feature, since multi:newJob() does something like this, yet it is completely different. nextStep adds a function that is executed first on the next step. If multiple functions are added, they are executed in the order they were added.
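A minimal sketch of the ordering behavior described above, assuming multi.nextStep takes a plain function as its only argument (the exact signature may differ):

```lua
package.path = "?/init.lua;?.lua;" .. package.path
multi = require("multi")
-- Both callbacks run before regular tasks on the next step, in insertion order.
multi.nextStep(function() print("ran first") end)
multi.nextStep(function() print("ran second") end)
multi:newLoop(function(self) print("normal loop work") end)
multi:mainloop()
```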

Note: The upper limit of this library's performance on my machine is ~39 mil. This is simply a while loop counting up from 0 that stops after 1 second. The 20 mil that I am currently getting is probably as fast as it can get, since it's half of the maximum possible performance, and I have noticed that each layer doubles complexity. Throughout the years with this library I have seen massive improvements in speed. In the beginning we had only ~2000 steps per second. Fast, right? Then after some tweaks we went to about 300000 steps per second, then 600000. Some more tweaks brought me to ~1 mil steps per second, then to ~4 mil, then ~9 mil, and now finally ~20 mil... The doubling effect that I have been seeing means that odds are I have reached the limit. I will aim to add more features and optimize individual objects. If it's possible to make the library even faster, then I will go for it.

Update 12.1.0 - Threads just can't hold on anymore

Fixed:

Changed:

package.path = "?/init.lua;?.lua;" .. package.path
multi = require("multi")
local a = 0
multi:newThread("test", function()
    print("lets go")
    b, c = thread.hold(function()
        -- This now returns what was managed here
        return b, "We did it!"
    end)
    print(b, c)
end)
multi:newTLoop(function()
    a = a + 1
    if a == 5 then
        b = "Hello"
    end
end, 1)
multi:mainloop()

Note: Only if the first return is non-nil/false will any other returns be passed! So while variable b above is nil, the string "We did it!" will not be passed. Also, while this seems simple enough to get working, I had to modify how the scheduler worked a bit to add such a simple feature. Quite a bit is going on behind the scenes, which made this a bit tricky to implement, but not hard; it just needed a bit of tinkering. Plus, event objects had not been edited since the creation of the EventManager; they have remained mostly the same since 2011.

Going forward:

Continue to make small changes as I come across them. This change was inspired while working on the net library. I was adding simple binary file support over TCP and needed to pass the data from the socket once the requested amount had been received. While upvalues did work, I felt returning data was cleaner, so I added this feature.

Update: 12.0.0 - Big update (Lots of additions some changes)

Note: After doing some testing, I have noticed that using multi-objects is quite a bit faster than using multi:newThread() (coroutines). Only create a thread if there is no other possibility! System threads are different and will improve performance if you know what you are doing. Using a (coroutine) thread as a loop is slower than using a TLoop! If you do not need the holding features, I strongly recommend that you use the multi-objects. This could be due to the scheduler that I am using, and I am looking into improving its performance for (coroutine) threads. This is still a work in progress, so expect things to only get better as time passes! This was the reason threadloop was added. It binds the thread scheduler into the mainloop, allowing threads to run much faster than before. Also, the use of locals is now possible since I am not dealing with separate objects. And finally, reduced function overhead helps keep the threads running better.

Note: The nodeManager is being reworked! This will take some time before it is in a stable state. The old version had some major issues that caused it to perform poorly.

Note: Version names were brought back to reality this update. When transitioning from EventManager to multi I stopped counting, when in reality it was simply an overhaul of the previous library.

Added:

Changed:

Node:

Master:

Bugs

Going forward:

Note on queues: network queues only send one way. What I mean by that is that if the master sends a message to a node, its own queue will not get populated at all. The reason is that syncing which item was popped from which network queue would make things really slow and would not perform well at all. This means you have to code a bit differently. Use master:getFreeNode() to get the name of the node under the least amount of load, then handle the sending of data to each node that way.
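In code, the pattern described above might look like this. getFreeNode and execute are the names used elsewhere in this changelog; treat this as a sketch rather than the definitive API:

```lua
-- Instead of waiting on two-way queue syncing (which the library deliberately
-- avoids), ask the master which node is least loaded and push the work there.
local node_name = master:getFreeNode()
master:execute("TestFunc", node_name, "some payload")
```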

Now there is a little trick you can do. If you combine both the network manager and the system threading manager, you could have a proxy queue for all system threads that pulls from that "node". Data passing within a LAN (and a WAN if using the node manager, though p2p isn't working as I would like and you would need to open ports to make things work; remember you can define a port for your node, so you can port forward it if you want) is fast enough, but the waiting problem is something to consider. Ask yourself what you are coding and whether network parallelism is worth using.

Note: These examples assume that you have already connected the nodes to the node manager. You do not need to use the node manager, but sometimes broadcast does not work as expected and the master does not connect to the nodes. Using the node manager offers nice features like removing nodes from the master when they have disconnected and automatically telling the master when nodes have been added. A more complete example showing connections regardless of order will be in the examples folder; check it out. New naming scheme too.

NodeManager.lua

package.path = "?/init.lua;?.lua;" .. package.path
multi = require("multi")
local GLOBAL, THREAD = require("multi.integration.lanesManager").init()
nGLOBAL = require("multi.integration.networkManager").init()
multi:nodeManager(12345) -- Host a node manager on port 12345
print("Node Manager Running...")
settings = {
    priority = 0, -- 1 or 2
    protect = false,
}
multi:mainloop(settings) -- That's all you need to run the node manager; everything else is done automatically

Side note: I had a setting called cross talk that would allow nodes to talk to each other. After some thought I decided not to allow nodes to talk to each other directly! You can, however, create another master within the node (the node will connect to its own master as well). This gives you the ability to "cross talk" with each node. Reimplementing the master features into each node directly was unnecessary.

Node.lua

package.path = "?/init.lua;?.lua;" .. package.path
multi = require("multi")
local GLOBAL, THREAD = require("multi.integration.lanesManager").init()
nGLOBAL = require("multi.integration.networkManager").init()
master = multi:newNode{
    allowRemoteRegistering = true, -- allows you to register functions from the master on the node; default is false
    name = nil, -- default value
    noBroadCast = true, -- if using the node manager, set this to true to prevent the node from broadcasting
    managerDetails = {"localhost", 12345}, -- connects to the node manager if one exists
}
function RemoteTest(a, b, c) -- a function that we will be executing remotely
    print("Yes I work!", a, b, c)
end
settings = {
    priority = 0, -- 1 or 2
    protect = false, -- if something goes wrong we will crash hard, but the speed gain is good
}
multi:mainloop(settings)

Master.lua

-- Set up the package path
package.path = "?/init.lua;?.lua;" .. package.path
-- Import the libraries
multi = require("multi")
local GLOBAL, THREAD = require("multi.integration.lanesManager").init()
nGLOBAL = require("multi.integration.networkManager").init()
-- Act as a master node
master = multi:newMaster{
    name = "Main", -- the name of the master
    noBroadCast = true, -- if using the node manager, set this to true to avoid double connections
    managerDetails = {"localhost", 12345}, -- the details to connect to the node manager (ip, port)
}
-- Send to all the nodes that are connected to the master
master:doToAll(function(node_name)
    master:register("TestFunc", node_name, function(msg)
        print("It works: " .. msg)
    end)
    multi:newAlarm(2):OnRing(function(alarm)
        master:execute("TestFunc", node_name, "Hello!")
        alarm:Destroy()
    end)
    multi:newThread("Checker", function()
        while true do
            thread.sleep(1)
            if nGLOBAL["test"] then
                print(nGLOBAL["test"])
                thread.kill()
            end
        end
    end)
    nGLOBAL["test2"] = {age = 22}
end)
-- Starting the multitasker
settings = {
    priority = 0, -- 0, 1 or 2
    protect = false,
}
multi:mainloop(settings)

Note: There are many ways to work this. You could send functions/methods to a node like how systemThreadedJobQueues work, or you could write the methods you want in advance in each node file and send over the command to run the method with arguments, and it will return the results. Network threading is different from system threading: data transfer is really slow by comparison. In fact, the main usage for this feature in the library is merely for experiments. Right now I honestly do not know what I want to do with this feature or what I am going to add to it. The ability to use this feature like a system thread will be possible, but there are some things that differ.

Changed:

Modifying the global stack is not the best way to manage or load in the library.

-- Base library
multi = require("multi")

-- In Lanes
multi = require("multi")
local GLOBAL, THREAD = require("multi.integration.lanesManager").init()

-- In Love2d
multi = require("multi")
GLOBAL, THREAD = require("multi.integration.loveManager").init()

-- In Luvit
local timer = require("timer")
local thread = require("thread")
multi = require("multi")
require("multi.integration.luvitManager").init(thread, timer)
-- Luvit does not currently have support for the global table or threads.

Improvements:

Removed:

The new settings table makes all of these possible and removes a lot of function overhead that was going on before.

multi:mainloop{
    priority = 1, -- 1 or 2
    protect = true, -- Should I use pcall to ignore errors?
    preLoop = function(self) -- a function that is called before the mainloop does its thing
        multi:newTLoop(function()
            print("Hello whats up!")
            error(":P")
        end, 1)
        multi.OnError(function(obj, err)
            print(err)
            obj:Destroy()
        end)
    end,
}

Update: 1.11.1 - Small Clarification on Love

Love2d change: I didn't fully understand how the new love.run() function worked. It works by returning a function that steps the main loop each time it is called. This means that we can do something like this:

multi:newLoop(love.run()) -- Run the mainloop here; cannot use thread.* when using this object
-- or
multi:newThread("MainLoop", love.run()) -- allows you to use thread.*

-- And you'll need to throw this in at the end
multi:mainloop()

The priority management system should be quite useful with this change. NOTE: multiobj:hold() will be removed in the next version! This is something I feel should be changed, since threads (coroutines) do the job great, and way better than the holding method I threw together 5 years ago. I doubt this is being used by many anyway. Version 1.11.2 or version 2.0.0 will have this change. The next update will be either bug fixes (if any) or network parallelism.

TODO: Add auto priority adjustments when working with priority and stuff... If the system is under heavy load it will dial some things deemed as less important down and raise the core processes.

Update: 1.11.0

Added:

-- Main thread:
console = multi:newSystemThreadedConsole("console"):init()

-- Inside a thread:
console = THREAD.waitFor("console"):init()

-- Using the console
console:print(...)
console:write(...)
-- Kind of useless for formatting code though; other threads can easily mess this up.

Fixed/Updated:

function love.update(dt)
    multi:uManager(dt) -- runs the main loop of the multitasking library
end

function love.draw()
    multi.dManager() -- If using my guimanager; if not, omit this
end

Update: 1.10.0

Note: The library is now considered to be stable! Upcoming: Network parallelism is on the way. It is in the works and should be released soon

Added:

Example of threaded connections

package.path = "?/init.lua;?.lua;" .. package.path
local GLOBAL, THREAD = require("multi.integration.lanesManager").init()
multi:newSystemThread("Test_Thread_1", function()
    connOut = THREAD.waitFor("ConnectionNAMEHERE"):init()
    connOut(function(arg)
        print(THREAD.getName(), arg)
    end)
    multi:mainloop()
end)
multi:newSystemThread("Test_Thread_2", function()
    connOut = THREAD.waitFor("ConnectionNAMEHERE"):init()
    connOut(function(arg)
        print(THREAD.getName(), arg)
    end)
    multi:mainloop()
end)
connOut = multi:newSystemThreadedConnection("ConnectionNAMEHERE"):init()
a = 0
connOut(function(arg)
    print("Main", arg)
end)
multi:newTLoop(function()
    a = a + 1
    connOut:Fire("Test From Main Thread: " .. a .. "\n")
end, 1)

Fixed:

loveManager and shared threading objects

Example of threaded tables

package.path = "?/init.lua;?.lua;" .. package.path
local GLOBAL, sThread = require("multi.integration.lanesManager").init()
multi:newSystemThread("Test_Thread_1", function()
    require("multi")
    test = sThread.waitFor("testthing"):init()
    multi:newTLoop(function()
        print("------")
        for i, v in pairs(test.tab) do
            print("T1", i, v)
        end
    end, 1)
    multi:mainloop()
end)
multi:newSystemThread("Test_Thread_2", function()
    require("multi")
    test = sThread.waitFor("testthing"):init()
    multi:newTLoop(function()
        print("------")
        for i, v in pairs(test.tab) do
            print("T2", i, v)
        end
    end, 1)
    multi:mainloop()
end)
test = multi:newSystemThreadedTable("testthing"):init()
multi:newTLoop(function()
    local a, b = multi.randomString(8), multi.randomString(4)
    print(">", a, b)
    test[a] = b
end, 1)
multi:mainloop()

Update: 1.9.2

Added:

stamper = multi:newTimeStamper()
stamper:OnTime(int hour, int minute, int second, func)
stamper:OnTime(string time, func) -- time as "00:00:00"
stamper:OnHour(int hour, func)
stamper:OnMinute(int minute, func)
stamper:OnSecond(int second, func)
stamper:OnDay(int day, func)
stamper:OnDay(string day, func) -- "Mon", "Tues", "Wed", etc.
stamper:OnMonth(int month, func)
stamper:OnYear(int year, func)

Improved:

Fixed:

Changed:

Update: 1.9.1 - Threads can now argue

Added:

Updated:

Update: 1.9.0

Added:

Works on threads and regular objects. Requires the latest bin library to work!

talarm = multi:newThreadedAlarm("AlarmTest", 5)
talarm:OnRing(function()
    print("Ring!")
end)
bin.new(talarm:ToString()):tofile("test.dat")
-- multi:newFromString(bin.load("test.dat"))

A more seamless way to use this will be made in the form of state saving. This is still a WIP; processes, timers, timemasters, watchers, and queuers have not been worked on yet.

Update: 1.8.7

Added:

function test(a, b, c)
    print("Running...")
    a = 0
    for i = 1, 1000000000 do
        a = a + 1
    end
    return a, b + c
end
print(multi.timer(test, 1, 2, 3))
print(multi.timer(test, 1, 2, 3))
-- multi.timer returns the time taken, then the returns from the function...
-- Uses unpack, so be careful of nil values!

Update: 1.8.6

Added:

This will run said function in every thread.

-- Going to use love2d code this time; almost the same as last time... See ramblings
require("core.Library")
GLOBAL, sThread = require("multi.integration.loveManager").init() -- loads the love2d version of the lanesManager and requires the entire multi library
require("core.GuiManager")
gui.ff.Color = Color.Black
jQueue = multi:newSystemThreadedJobQueue()
jQueue:registerJob("TEST_JOB", function(a, s)
    math.randomseed(s)
    TEST_JOB2()
    return math.random(0, 255)
end)
jQueue:registerJob("TEST_JOB2", function()
    print("Test Works!")
end)
-- 1.8.6 EXAMPLE change
jQueue:start() -- This is now needed!
-- jQueue:doToAll(function() print("Doing this 2? times!") end)
tableOfOrder = {}
jQueue.OnJobCompleted(function(JOBID, n)
    tableOfOrder[JOBID] = n
    if #tableOfOrder == 10 then
        t.text = "We got all of the pieces!"
    end
end)
for i = 1, 10 do
    -- Job name of registered function, ... varargs
    jQueue:pushJob("TEST_JOB", "This is a test!", math.random(1, 1000000))
end
t = gui:newTextLabel("not done yet!", 0, 0, 300, 100)
t:centerX()
t:centerY()

Update: 1.8.5

Added:

Allows the execution of system calls without holding up the main loop. It is possible to do the same using io.popen()! You decide which works best for you!
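For comparison, the blocking io.popen() approach mentioned above looks like this; it is standard Lua, with no library needed:

```lua
-- io.popen blocks while the command's output is read, which is exactly
-- the hold-up that newSystemThreadedExecute avoids.
local p = io.popen("echo hello")
local output = p:read("*a")
p:close()
print(output)
```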

local GLOBAL, sThread = require("multi.integration.lanesManager").init()
cmd = multi:newSystemThreadedExecute("SystemThreadedExecuteTest.lua") -- This file is important!
cmd.OnCMDFinished(function(code)
    -- Callback function to grab the exit code... called when the command goes through
    print("Got Code: " .. code)
end)
multi:newTLoop(function()
    print("...") -- let's show that we aren't being held up
end, 1)
multi:mainloop()

Update: 1.8.4

Added:

Using multi:newSystemThreadedJobQueue()

First you need to create the object. This works the same way with love2d as it does with lanes... It is getting harder to make both work the same way with speed in mind... Anyway...

-- Creating the object; using the lanes manager to showcase this. The examples folder has the file for love2d.
local GLOBAL, sThread = require("multi.integration.lanesManager").init()
jQueue = multi:newSystemThreadedJobQueue(n)
-- This internally creates system threads. By default it will use the number of
-- processors on your system, though you can set this number.
-- Only create 1 job queue! For now, making more than 1 is not supported. You only
-- really need one though; just register new functions if you want 1 queue to do
-- more. The one reason to want more is keeping track of jobIDs. I have an idea
-- that I will roll out in the ~~next update~~ eventually.
jQueue:registerJob("TEST_JOB", function(a, s)
    math.randomseed(s) -- We will push a random #
    TEST_JOB2() -- You can call other registered functions as well!
    return math.random(0, 255) -- send the result to the main thread
end)
jQueue:registerJob("TEST_JOB2", function()
    print("Test Works!") -- this is called from the job since it is registered on the same queue
end)
tableOfOrder = {} -- This is how we will keep order of our completed jobs. There is no guarantee that the order will be correct.
jQueue.OnJobCompleted(function(JOBID, n)
    -- Whenever a job is completed, this event fires. It passes the JOBID followed by the returns of the job.
    -- JOBID is the completed job; it starts at 1 and counts up by 1.
    -- Threads finish at different times, so jobIDs may be passed out of order! Be sure to have a way to order them.
    tableOfOrder[JOBID] = n -- we order ours by putting them into a table
    if #tableOfOrder == 10 then
        print("We got all of the pieces!")
    end
end)
-- Let's push the jobs now
for i = 1, 10 do
    -- Job name of registered function, ... varargs
    jQueue:pushJob("TEST_JOB", "This is a test!", math.random(1, 1000000))
end
print("I pushed all of the jobs :)")
multi:mainloop() -- Start the main loop :D

That's it for this version!

Update: 1.8.3 - Mainloop receives some needed overhauling

Added:

New mainloop functions. Below you can see the slight differences... Function overhead is not too bad in Lua, but it makes a real difference here. multi:mainloop() and multi:unprotectedMainloop() use the same algorithm, yet the dedicated unprotected one is slightly faster because it has less function overhead.

* The OG mainloop function remains the same, and the old ways of achieving what the new functions do still exist

These new methods help by removing function overhead that is caused through the original mainloop function. The one downside is that you no longer have the flexibility to change the processing during runtime.

However, there is a workaround! You can use processes to run multiobjs as well and use the other methods on them.

I may make a full comparison between each method and which is faster, but for now trust that the dedicated ones with less function overhead are in fact faster. Not by much, but still faster.
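A rough sketch of choosing between the two variants (assuming the multi library is already loaded; the DEBUG_MODE flag and the TLoop body are made-up for illustration):

```lua
-- Some work for the scheduler to run
multi:newTLoop(function()
	print("tick")
end, 1)

-- Pick the loop variant once, at startup. The dedicated variants trade
-- runtime flexibility for less per-cycle function overhead.
if DEBUG_MODE then
	multi:mainloop()            -- protected/flexible: slightly slower
else
	multi:unprotectedMainloop() -- less function overhead: slightly faster
end
```

Since the dedicated loops cannot change their processing at runtime, the choice has to be made up front like this.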

Update: 1.8.2

Added:

The threaded table is set up just like the threaded queue.
It provides GLOBAL-like features without having to write to GLOBAL!
This is useful for module creators who want to keep their data private, but still use GLOBAL-like coding.
It has a few features that make it a bit better than plain ol' GLOBAL (for now...) (ThreadedTable, TT for short). This was modified by a recent version that removed the need for a sync command.

We also have the "sync" method; this one was made for love2d because we do a syncing trick to get data in a table format. The lanes side has a sync method as well, so no worries. Indexing calls sync once and may grab your variable. This lets you have the lanes-like indexing syntax when doing regular indexing on the love2d side of the module. As of right now both sides work flawlessly! And this effect now applies to GLOBAL as well.

On GLOBAL, sync is an internal method for keeping the GLOBAL table in order. You can still use sThread.waitFor(name) to wait for variables that may or may not yet exist!

Time for some examples: Using multi:newSystemThreadedTable(name)

```lua
-- lanes (desktop Lua)! NOTE: this is in lanesintergratetest6.lua in the examples folder
local GLOBAL, sThread = require("multi.integration.lanesManager").init()
test = multi:newSystemThreadedTable("YO"):init()
test["test1"] = "lol"
multi:newSystemThread("test", function()
	tab = sThread.waitFor("YO"):init()
	print(tab:has("test1"))
	sThread.sleep(3)
	tab["test2"] = "Whats so funny?"
end)
multi:newThread("test2", function()
	print(test:waitFor("test2"))
end)
multi:mainloop()
```

```lua
-- love2d Lua! NOTE: this is in main4.lua in the love2d examples
require("core.Library")
GLOBAL, sThread = require("multi.integration.loveManager").init() -- load the love2d version of the lanesManager; requires the entire multi library
require("core.GuiManager")
gui.ff.Color = Color.Black
test = multi:newSystemThreadedTable("YO"):init()
test["test1"] = "lol"
multi:newSystemThread("test", function()
	tab = sThread.waitFor("YO"):init()
	print(tab["test1"])
	sThread.sleep(3)
	tab["test2"] = "Whats so funny?"
end)
multi:newThread("test2", function()
	print(test:waitFor("test2"))
	t.text = "DONE!"
end)
t = gui:newTextLabel("not done yet!", 0, 0, 300, 100)
t:centerX()
t:centerY()
```

Update: 1.8.1

No real change!
Changed the structure of the library. Combined the coroutine based threads into the core!
Only compat and integrations are not part of the core and never will be by nature.
This should make the library more convenient to use.
I left multi/all.lua file so if anyone had libraries/projects that used that it will still work!
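A sketch of what the restructuring means for requires (assumed behavior based on the notes above, not verified against this exact version):

```lua
-- The core now includes the coroutine-based threads:
require("multi")

-- Kept around, so older projects that used the combined entry point still work:
-- require("multi.all")

-- Compat and integrations remain separate by nature and are required on their own:
-- local GLOBAL, sThread = require("multi.integration.lanesManager").init()
```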
Updated from 1.7.6 to 1.8.0
(How much thread could a thread thread if a thread could thread thread?)

Added:

Using multi:systemThreadedBenchmark()

```lua
package.path = "?/init.lua;" .. package.path
local GLOBAL, sThread = require("multi.integration.lanesManager").init()
multi:systemThreadedBenchmark(3):OnBench(function(self, count)
	print("First Bench: " .. count)
	multi:systemThreadedBenchmark(3, "All Threads: ")
end)
multi:mainloop()
```

Using multi:newSystemThreadedQueue()

Quick note: queues shared across multiple objects pull from the same underlying "queue"; keep this in mind when coding! Also, the queue respects direction: a push on the thread side cannot be popped on the thread side... the same goes for the main thread!
Turns out I was wrong about this...
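In other words, both sides can push and pop the same queue. A minimal sketch of that (lanes version, using the API shown in this changelog; the comment about thread.hold's return value is an assumption):

```lua
local GLOBAL, sThread = require("multi.integration.lanesManager").init()
queue = multi:newSystemThreadedQueue("QUEUE"):init()
queue:push("from main") -- pushed on the main-thread side...
multi:newSystemThread("worker", function()
	local q = sThread.waitFor("QUEUE"):init()
	print(q:pop())        -- ...popped on the thread side
	q:push("from thread") -- and a thread-side push can be popped on the main side
end)
multi:newThread("reader", function()
	local msg = thread.hold(function() return queue:pop() end)
	print(msg) -- assuming hold returns the truthy value from its condition
end)
multi:mainloop()
```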

```lua
-- In love2d; this file is in the same example folder as before, but is named main2.lua
require("core.Library")
GLOBAL, sThread = require("multi.integration.loveManager").init() -- load the love2d version of the lanesManager; requires the entire multi library
-- IMPORTANT
-- Do not make the above local; this is the one difference that the lanesManager does not have.
-- If these are local, the functions will have upvalues put into them that do not exist on the threaded side.
-- You will need to ensure that the function does not refer to any upvalues in its code. It will print an error if it does though.
-- Also, each thread has a .1 second delay! This is used to generate a random value for each thread!
require("core.GuiManager")
gui.ff.Color = Color.Black
function multi:newSystemThreadedQueue(name) -- in love2d this will spawn a channel on both ends
	local c = {}
	c.name = name
	if love then
		if love.thread then
			function c:init()
				self.chan = love.thread.getChannel(self.name)
				function self:push(v)
					self.chan:push(v)
				end
				function self:pop()
					return self.chan:pop()
				end
				GLOBAL[self.name] = self
				return self
			end
			return c
		else
			error("Make sure you required the love.thread module!")
		end
	else
		c.linda = lanes.linda()
		function c:push(v)
			self.linda:send("Q", v)
		end
		function c:pop()
			return ({self.linda:receive(0, "Q")})[2]
		end
		function c:init()
			return self
		end
		GLOBAL[name] = c
	end
	return c
end
queue = multi:newSystemThreadedQueue("QUEUE"):init()
queue:push("This is a test")
queue:push("This is a test2")
queue:push("This is a test3")
queue:push("This is a test4")
multi:newSystemThread("test2", function()
	queue = sThread.waitFor("QUEUE"):init()
	data = queue:pop()
	while data do
		print(data)
		data = queue:pop()
	end
	queue:push("DONE!")
end)
multi:newThread("test!", function()
	thread.hold(function()
		return queue:pop()
	end)
	t.text = "Done!"
end)
t = gui:newTextLabel("no done yet!", 0, 0, 300, 100)
t:centerX()
t:centerY()
```

In Lanes

```lua
-- The code is compatible with each other; I just wanted to show different things you can do in both examples
-- This file can be found in the examples folder as lanesintegrationtest4.lua
local GLOBAL, sThread = require("multi.integration.lanesManager").init()
queue = multi:newSystemThreadedQueue("QUEUE"):init()
queue:push("This is a test")
queue:push("This is a test2")
queue:push("This is a test3")
queue:push("This is a test4")
multi:newSystemThread("test2", function()
	queue = sThread.waitFor("QUEUE"):init()
	data = queue:pop()
	while data do
		print(data)
		data = queue:pop()
	end
	queue:push("This is a test5")
	queue:push("This is a test6")
	queue:push("This is a test7")
	queue:push("This is a test8")
end)
multi:newThread("test!", function() -- this is a Lua (coroutine) thread
	thread.sleep(.1)
	data = queue:pop()
	while data do
		print(data)
		data = queue:pop()
	end
end)
multi:mainloop()
```

Update: 1.7.6

Fixed:

Typos, like always

Added:

Improved:

Update: 1.7.5

Fixed some typos in the readme... (I am sure there are more; there are always more)
Added more features for module support
TODO:
Work on performance of the library... I see 3 places where I can make this thing run quicker

I'll showcase some old versions of the multitasking library eventually so you can see how it has changed in days past!

Update: 1.7.4

Added: the example folder, which will be populated with more examples in the near future!
The loveManager integration, which mimics the lanesManager integration almost exactly to keep coding in both environments as close as possible. This is done mostly for library creation support!
An example of the loveManager in action, using almost the same code as lanesintergreationtest2.lua
NOTE: This code has only been tested on love2d version 1.10.2, though it should work with version 0.9.0 as well

```lua
require("core.Library") -- Didn't add this to a repo yet! Will do eventually... Allows for injections and other cool things
require("multi.compat.love2d") -- allows for multitasking and binds my libraries to the love2d engine that I am using
GLOBAL, sThread = require("multi.integration.loveManager").init() -- load the love2d version of the lanesManager
-- IMPORTANT
-- Do not make the above local; this is the one difference that the lanesManager does not have.
-- If these are local, the functions will have upvalues put into them that do not exist on the threaded side.
-- You will need to ensure that the function does not refer to any upvalues in its code. It will print an error if it does though.
-- Also, each thread has a .1 second delay! This is used to generate a random value for each thread!
require("core.GuiManager") -- allows the use of graphics in the program
gui.ff.Color = Color.Black
function comma_value(amount)
	local formatted = amount
	while true do
		formatted, k = string.gsub(formatted, "^(-?%d+)(%d%d%d)", '%1,%2')
		if (k == 0) then
			break
		end
	end
	return formatted
end
multi:newSystemThread("test1", function()
	-- Another difference: the multi library is already loaded in the threaded environment, as is a call to multi:mainloop()
	multi:benchMark(sThread.waitFor("Bench"), nil, "Thread 1"):OnBench(function(self, c)
		GLOBAL["T1"] = c
		multi:Stop()
	end)
end)
multi:newSystemThread("test2", function() -- spawns a thread in another Lua process
	multi:benchMark(sThread.waitFor("Bench"), nil, "Thread 2"):OnBench(function(self, c)
		GLOBAL["T2"] = c
		multi:Stop()
	end)
end)
multi:newSystemThread("test3", function()
	multi:benchMark(sThread.waitFor("Bench"), nil, "Thread 3"):OnBench(function(self, c)
		GLOBAL["T3"] = c
		multi:Stop()
	end)
end)
multi:newSystemThread("test4", function()
	multi:benchMark(sThread.waitFor("Bench"), nil, "Thread 4"):OnBench(function(self, c)
		GLOBAL["T4"] = c
		multi:Stop()
	end)
end)
multi:newSystemThread("test5", function()
	multi:benchMark(sThread.waitFor("Bench"), nil, "Thread 5"):OnBench(function(self, c)
		GLOBAL["T5"] = c
		multi:Stop()
	end)
end)
multi:newSystemThread("test6", function()
	multi:benchMark(sThread.waitFor("Bench"), nil, "Thread 6"):OnBench(function(self, c)
		GLOBAL["T6"] = c
		multi:Stop()
	end)
end)
multi:newSystemThread("Combiner", function()
	function comma_value(amount)
		local formatted = amount
		while true do
			formatted, k = string.gsub(formatted, "^(-?%d+)(%d%d%d)", '%1,%2')
			if (k == 0) then
				break
			end
		end
		return formatted
	end
	local b = comma_value(tostring(
		sThread.waitFor("T1") + sThread.waitFor("T2") + sThread.waitFor("T3") +
		sThread.waitFor("T4") + sThread.waitFor("T5") + sThread.waitFor("T6")))
	GLOBAL["DONE"] = b
end)
multi:newThread("test0", function()
	-- sThread.waitFor("DONE") -- holds the main thread completely so we don't eat up cpu
	-- os.exit() -- when the main thread is holding, there is a chance that error handling on the system threads may not work!
	-- Instead we can do this:
	while true do
		thread.skip(1) -- allow error handling to take place... otherwise let's keep the main thread running on the low
		-- Before, we held just because we could... But this is a game and we need logic to continue
		-- sThread.sleep(.001) -- Sleeping for .001 is a great way to keep cpu usage down. If you aren't doing work, rest. Abuse the hell out of GLOBAL if you need to :P
		if GLOBAL["DONE"] then
			t.text = "Bench: " .. GLOBAL["DONE"]
		end
	end
end)
GLOBAL["Bench"] = 3
t = gui:newTextLabel("no done yet!", 0, 0, 300, 100)
t:centerX()
t:centerY()
```

Update: 1.7.3

Changed how requiring the library works! require("multi.all") will still work as expected; however, with the exception of threading, compat, and integrations, everything else has been moved into the core of the library.

```lua
-- These are no longer required and will cause an error if required directly:
require("multi.loop")
require("multi.alarm")
require("multi.updater")
require("multi.tloop")
require("multi.watcher")
require("multi.tstep")
require("multi.step")
require("multi.task")
-- ^ they are all part of the core now
```

Update: 1.7.2

Moved updaters, loops, and alarms into the init.lua file. I consider them core features, and they are referenced in init.lua, so they need to exist there. Threaded versions are still separate though. Also added another example file.

Update: 1.7.1 - Bug Fixes Only

¯\\\_(ツ)\_/¯

Update: 1.7.0 - Threading the systems

Modified: multi.integration.lanesManager.lua. It is now in a stable and simple state and works with the latest lanes version! Tested with version 3.11; I cannot promise that everything will work with earlier versions. Future versions should be fine though.
Example Usage:
sThread is a handle to a global interface that system threads use to interact with one another
thread is the interface for multi threads, as seen in the threading section

GLOBAL is a table that can be used throughout each and every thread

sThreads have a few methods:

sThread.set(name, val) -- you can use the GLOBAL table instead; it modifies the same table anyway
sThread.get(name) -- you can use the GLOBAL table instead; it reads the same table anyway
sThread.waitFor(name) -- waits until a value exists, then returns it
sThread.getCores() -- returns the number of cores on your cpu
sThread.sleep(n) -- sleeps for a bit, stopping the entire thread from running
sThread.hold(n) -- holds until a condition is met

```lua
local GLOBAL, sThread = require("multi.integration.lanesManager").init()
require("multi.all")
multi:newAlarm(2):OnRing(function(self)
	GLOBAL["NumOfCores"] = sThread.getCores()
end)
multi:newAlarm(7):OnRing(function(self)
	GLOBAL["AnotherTest"] = true
end)
multi:newAlarm(13):OnRing(function(self)
	GLOBAL["FinalTest"] = true
end)
multi:newSystemThread("test", function() -- spawns a thread in another Lua process
	require("multi.all") -- now you can do all of your coding with the multi library!
	-- You could even spawn more threads from here with the integration.
	-- You would need to require the integration again though.
	print("Waiting for variable: NumOfCores")
	print("Got it: ", sThread.waitFor("NumOfCores"))
	sThread.hold(function()
		return GLOBAL["AnotherTest"] -- note: this holds the entire system thread. Spawn a coroutine thread using multi:newThread() or multi:newThreaded...
	end)
	print("Holding works!")
	multi:newThread("tests", function()
		thread.hold(function()
			return GLOBAL["FinalTest"] -- note: this will not hold the entire system thread, as seen with the TLoop constantly going!
		end)
		print("Final test works!")
		os.exit()
	end)
	local a = 0
	multi:newTLoop(function()
		a = a + 1
		print(a)
	end, .5)
	multi:mainloop()
end)
multi:mainloop()
```

Update: 1.6.0

Changed:

```lua
-- Was
step:OnStep(function(pos, self) -- same goes for tsteps as well
	print(pos)
end)
multi:newLoop(function(dt, self)
	print(dt)
end)

-- Is now
step:OnStep(function(self, pos) -- same goes for tsteps as well
	print(pos)
end)
multi:newLoop(function(self, dt)
	print(dt)
end)
```

Reasoning: I wanted to keep objects consistent, but a lot of my older libraries use the old way of doing things. Therefore, I added a backwards-compatibility module.

Note from the future: That module has been canned. To be honest most features this low in the changelog are outdated and probably do not work.

```lua
require("multi.all")
require("multi.compat.backwards[1,5,0]") -- allows for the use of features that were scrapped/changed in 1.6.0+
```

Update: 1.5.0

Added:

Update: 1.4.1 (4/10/2017) - First Public release of the library

Added:

Change:

Note: Wow, you looked back this far. Nice! While you're at it, take a look at the old versions to see how the code was before my first public release.

Upcoming:

Update: 1.4.0 (3/20/2017)

Added:


```lua
require("multimanager") -- require the library
int1 = multi:newProcess() -- create a process
int1.NAME = "int1" -- give it a name for example purposes
int2 = multi:newProcess() -- create another process to reallocate to
int2.NAME = "int2" -- give this one a different name
step = int1:newTStep(1, 10) -- create a TStep so we can slowly see what is going on
step:OnStep(function(p, s) -- connect to the onstep event
	print(p, s.Parent.NAME) -- print the position and process name
end)
step:OnEnd(function(s) -- when the step ends, reallocate it to the other process
	if s.Parent.NAME == "int1" then -- only do this if it is in the int1 process
		s:reallocate(int2) -- send it to int2
		s:Reset() -- reset the object
	else
		print("We are done!")
		os.exit() -- end the program when int2 did its thing
	end
end)
int1:Start() -- start process 1
int2:Start() -- start process 2
multi:mainloop() -- start the main loop
```

Fixed/Updated:


```lua
int = multi:newQueuer()
step = int:newTStep(1, 10, 1, .5)
alarm = int:newAlarm(2)
step2 = int:newTStep(1, 5, 1, .5)
step:OnStep(function(p, s)
	print(p)
end)
step2:OnStep(function(p, s)
	print(p, "!")
end)
alarm:OnRing(function(a)
	print("Ring1!!!")
end)
int:OnQueueCompleted(function(s)
	s:Pause()
	print("Done!")
	os.exit()
end)
int:Start()
multi:mainloop()
```

Update: 1.3.0 (1/29/2017)

Added:

Update: 1.2.0 (12.31.2016)

Added:


Update: 1.1.0

Changed:

```lua
OnUpdate = multi:newConnection()
OnUpdate:connect(function(...)
	print("Updating", ...)
end)
OnUpdate:Fire(1, 2, 3)
```

New Usage:

```lua
OnUpdate = multi:newConnection()
OnUpdate(function(...)
	print("Updating", ...)
end)
OnUpdate:Fire(1, 2, 3)
```

Update: 1.0.0

Added:

Update: 0.6.3

Note: No official changelog was made for versions this old. Doing code comparisons is way too much work

Update: 0.6.2

Note: No official changelog was made for versions this old. Doing code comparisons is way too much work

Update: 0.6.1-6

Note: No official changelog was made for versions this old. Doing code comparisons is way too much work

Update: 0.5.1-6

Note: No official changelog was made for versions this old. Doing code comparisons is way too much work

Update: 0.4.1

Note: No official changelog was made for versions this old. Doing code comparisons is way too much work

Update: 0.3.0 - The update that started it all

Changed:

Update: EventManager 2.0.0

Changed:


Update: EventManager 1.2.0

Changed:

Added:

Update: EventManager 1.1.0

Added:

Update: EventManager 1.0.0 - Error checking

Changed:

Version: EventManager 0.0.1 - In The Beginning things were very different

Usage:

```lua
require("BasicCommands")
require("EventManager")
event:setAlarm("test", 1)
a = 0
b = 0
function Alarm_test(tag)
	print("Alarm: " .. tag)
	a = 1
	b = b + 1
	event:updateAlarm("test", 1)
end
event:setEvent("test()", "a == 1")
event:setEvent("test2()", "b == 5")
function Event_test()
	print("Event! A=1 which means the alarm rang")
	a = 0
end
function Event_test2()
	event:Stop()
end
-- this might feel somewhat familiar
step = event:createStep("test", 10, true)
function Step_test(pos)
	print(pos, self)
end
function Step_test_End(tag)
	step:Remove()
end
function event:OnCreate()
	print(event:stepExist("test"))
	print(event:eventExist("test"))
	print(event:alarmExist("test"))
end
function event:OnUpdate()
	-- Called every cycle
end
function event:OnClose()
	print("Manager was stopped!")
end
event:Manager()
--event:CManager()
--event:UManager() -- One of the few things that lived on in name and spirit; the U just became lowercase haha
```