-
v16.0.1 - Bugfix Stable
released this
2024-03-24 23:29:43 -04:00 | 0 commits to master since this release
Update 16.0.1 - Bug fix
Fixed
- thread.pushStatus() was not working properly when forwarding events from a THREAD.pushStatus OnStatus connection. This bug also caused stack overflow errors with the following code:
```lua
func = thread:newFunction(function()
    for i=1,10 do
        thread.sleep(1)
        thread.pushStatus(i)
    end
end)

func2 = thread:newFunction(function()
    local ref = func()
    ref.OnStatus(function(num)
        -- do stuff with this data
        thread.pushStatus(num*2) -- Technically this is not run within a thread. This runs outside of a thread inside the thread handler.
    end)
end)

local handler = func2()
handler.OnStatus(function(num)
    print(num)
end)

multi:mainloop()
```
Downloads
-
V16.0.0 Stable
released this
2024-02-24 23:57:42 -05:00 | 0 commits to v16.0.0 since this release
Update 16.0.0 - Getting the priorities straight
Added New Integration: priorityManager
Allows the user to have multi automatically set priorities (requires chronos). Also adds the ability to create your own runners (multi:mainloop(), multi:uManager()) that you can set using the priority manager. Even if you do not have chronos installed, all other features will still work!
- Allows the creation of custom priorityManagers
Added
-
thread.defer(func) -- When using a coroutine thread or coroutine threaded function, defer will call its function at the end of the thread's life, whether through normal execution or an error. In the case of a threaded function, when the function returns or errors.
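A rough sketch of how defer might be used (the thread name and log table are illustrative, not part of the library):

```lua
thread:newThread("defer_demo", function()
    local log = {} -- stand-in for some resource that needs cleanup
    thread.defer(function()
        -- Runs at the end of the thread's life, whether it returned normally or errored
        print("cleaning up, wrote", #log, "entries")
    end)
    for i = 1, 3 do
        thread.sleep(1)
        log[#log + 1] = i
    end
end)

multi:mainloop()
```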
-
multi:setTaskDelay(delay) -- Tasks, which are now tied to a processor, can have an optional delay between the execution of each task; useful, for example, for rate limiting. Without a delay all grouped tasks are handled in one step. delay can be a function as well and will be processed as if thread.hold was called.
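A small sketch of the delay in use (values and task bodies are illustrative):

```lua
multi:setTaskDelay(.05) -- 50 ms between each grouped task

multi:newTask(function() print("task 1") end)
multi:newTask(function() print("task 2") end)

-- The delay may also be a function, evaluated like a thread.hold condition:
-- multi:setTaskDelay(function() return ready_for_next_task end)

multi:mainloop()
```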
-
Processors now have a boost(count) function which causes the processor to run its processes the number of times specified.
-
thread.hold will now use a custom hold method for objects that have a Hold method. This is called like obj:Hold(opt). The only argument passed is the optional options table that thread.hold accepts. There is an exception for connection objects: while they do contain a Hold method, it isn't used here and exists for proxy objects, though it can be used in non-proxy/thread situations. Hold returns all the arguments that the connection object was fired with.
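A sketch of the hook with a hand-rolled object (the gate object and its field are illustrative; the exact semantics of custom Hold may differ, this assumes it blocks and returns its results):

```lua
local gate = { OnOpen = multi:newConnection() }

function gate:Hold(opt)
    -- thread.hold(gate, opt) ends up here; wait on the internal event and
    -- hand back whatever it was fired with
    return thread.hold(self.OnOpen, opt)
end

thread:newThread(function()
    print("opened with:", thread.hold(gate))
end)

multi:newAlarm(1):OnRing(function()
    gate.OnOpen:Fire("hello")
end)

multi:mainloop()
```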
-
shared_table = STP:newSharedTable(tbl_name) -- Allows you to create a shared table that all system threads in a process have access to. Returns a reference to that table for use on the main thread. Sets _G[tbl_name] on the system threads so you can access it there.

```lua
package.path = "?/init.lua;?.lua;"..package.path
multi, thread = require("multi"):init({print=true})
THREAD, GLOBAL = require("multi.integration.lanesManager"):init()

stp = multi:newSystemThreadedProcessor(8)
local shared = stp:newSharedTable("shared")
shared["test"] = "We work!"

for i=1,5 do
    -- There is a bit of overhead when creating threads on a process. Takes some
    -- time, mainly because we are creating a proxy.
    stp:newThread(function()
        local multi, thread = require("multi"):init()
        local shared = _G["shared"]
        print(THREAD_NAME, shared.test, shared.test2)
        multi:newAlarm(.5):OnRing(function()
            -- Play around with the time. System threads do not create instantly.
            -- They take quite a bit of time to get spawned.
            print(THREAD_NAME, shared.test, shared.test2)
        end)
    end)
end

shared["test2"] = "We work!!!"
multi:mainloop()
```

Output:

```
INFO: Integrated Lanes Threading!
STJQ_cPXT8GOx We work! nil
STJQ_hmzdYDVr We work! nil
STJQ_3lwMhnfX We work! nil
STJQ_hmzdYDVr We work! nil
STJQ_cPXT8GOx We work! nil
STJQ_cPXT8GOx We work! We work!!!
STJQ_hmzdYDVr We work! We work!!!
STJQ_3lwMhnfX We work! We work!!!
STJQ_hmzdYDVr We work! We work!!!
STJQ_cPXT8GOx We work! We work!!!
```
-
multi:chop(obj) -- We cannot directly interact with a local object on lanes, so we chop the object and set some globals on the thread side. Use it like: multi:newProxy(multi:chop(multi:newThread(function() ... end)))
-
multi:newProxy(ChoppedObject) -- Creates a proxy object that allows you to interact with an object on a thread
Note: Objects with __index=table do not work with the proxy object! The object must have that function in its own table for the proxy to pick it up and work properly. Connections on a proxy allow you to subscribe to an event on the thread side of things. The function that is being connected to runs on the thread!
-
multi:newSystemThreadedProcessor(name) -- Works like newProcessor(name); each object created returns a proxy object that you can use to interact with the objects on the system thread

```lua
package.path = "?/init.lua;?.lua;"..package.path
multi, thread = require("multi"):init({print=true})
THREAD, GLOBAL = require("multi.integration.lanesManager"):init()

stp = multi:newSystemThreadedProcessor("Test STP")
alarm = stp:newAlarm(3)
alarm._OnRing:Connect(function(alarm)
    print("Hmm...", THREAD_NAME)
end)
```

Output:

```
Hmm... SystemThreadedJobQueue_A5tp
```

Internally the SystemThreadedProcessor uses a JobQueue to handle things. The proxy function allows you to interact with these objects as if they were on the main thread, though their actions are carried out on the system thread.
Proxies can also be shared between threads, just remember to use proxy:getTransferable() before transferring and proxy:init() on the other end. (We need to avoid copying over coroutines)
The work done with proxies negates the need for multi:newSystemThreadedConnection(); the only difference is that you lose the metatables from connections.
You cannot connect directly to a proxy connection from the non-proxy thread; you can, however, use proxy_conn:Hold() or thread.hold(proxy_conn) to emulate this, see below.
package.path = "?/init.lua;?.lua;"..package.path multi, thread = require("multi"):init({print=true, warn=true, error=true}) THREAD, GLOBAL = require("multi.integration.lanesManager"):init() stp = multi:newSystemThreadedProcessor(8) tloop = stp:newTLoop(nil, 1) multi:newSystemThread("Testing proxy copy",function(tloop) local function tprint (tbl, indent) if not indent then indent = 0 end for k, v in pairs(tbl) do formatting = string.rep(" ", indent) .. k .. ": " if type(v) == "table" then print(formatting) tprint(v, indent+1) else print(formatting .. tostring(v)) end end end local multi, thread = require("multi"):init() tloop = tloop:init() print("tloop type:",tloop.Type) print("Testing proxies on other threads") thread:newThread(function() while true do thread.hold(tloop.OnLoop) print(THREAD_NAME,"Loopy") end end) tloop.OnLoop(function(a) print(THREAD_NAME, "Got loop...") end) multi:mainloop() end, tloop:getTransferable()).OnError(multi.error) print("tloop", tloop.Type) thread:newThread(function() print("Holding...") thread.hold(tloop.OnLoop) print("Held on proxied no proxy connection 1") end).OnError(print) thread:newThread(function() tloop.OnLoop:Hold() print("held on proxied no proxy connection 2") end) tloop.OnLoop(function() print("OnLoop",THREAD_NAME) end) thread:newThread(function() while true do tloop.OnLoop:Hold() print("OnLoop",THREAD_NAME) end end).OnError(multi.error) multi:mainloop()Output:
INFO: Integrated Lanes Threading! 1 tloop proxy Holding... tloop type: proxy Testing proxies on other threads OnLoop STJQ_W9SZGB6Y STJQ_W9SZGB6Y Got loop... OnLoop MAIN_THREAD Testing proxy copy Loopy Held on proxied no proxy connection 1 held on proxied no proxy connection 2 OnLoop STJQ_W9SZGB6Y STJQ_W9SZGB6Y Got loop... Testing proxy copy Loopy OnLoop MAIN_THREAD OnLoop STJQ_W9SZGB6Y STJQ_W9SZGB6Y Got loop... ... (Will repeat every second) Testing proxy copy Loopy OnLoop MAIN_THREAD OnLoop STJQ_W9SZGB6Y STJQ_W9SZGB6Y Got loop... ...The proxy version can only subscribe to events on the proxy thread, which means that connection metamethods will not work with the proxy version (
_OnRing on the non-proxy thread side), but the (OnRing) version will work. Cleverly handling the proxy thread and the non-proxy thread allows powerful connection logic. Also, this is not a full system threaded connection. Proxies should only be used between 2 threads! To keep things fast I'm using simple queues to transfer data. There is no guarantee that things will work!
Currently supporting:
- proxyLoop = STP:newLoop(...)
- proxyTLoop = STP:newTLoop(...)
- proxyUpdater = STP:newUpdater(...)
- proxyEvent = STP:newEvent(...)
- proxyAlarm = STP:newAlarm(...)
- proxyStep = STP:newStep(...)
- proxyTStep = STP:newTStep(...)
- proxyThread = STP:newThread(...)
- proxyService = STP:newService(...)
- threadedFunction = STP:newFunction(...)
Unique:
- STP:newSharedTable(name)
STP functions (the ones above) cannot be called within a coroutine-based thread when using lanes; doing so causes thread.hold to break. Objects (proxies) returned by these functions are fine to use in coroutine-based threads!
```lua
package.path = "?/init.lua;?.lua;"..package.path
multi, thread = require("multi"):init({print=true})
THREAD, GLOBAL = require("multi.integration.lanesManager"):init()

stp = multi:newSystemThreadedProcessor()
alarm = stp:newAlarm(3)
alarm.OnRing:Connect(function(alarm)
    print("Hmm...", THREAD_NAME)
end)

thread:newThread(function()
    print("Holding...")
    local a = thread.hold(alarm.OnRing) -- it works :D
    print("We work!")
end)

multi:mainloop()
```
-
multi.OnObjectDestroyed(func(obj, process)) now supplies obj, process just like OnObjectCreated
-
thread:newProcessor(name) -- works mostly like a normal process, but all objects are wrapped within a thread. So if you create a few loops, you can use thread.hold(), call threaded functions and wait, and use all the features that coroutines provide.
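A rough sketch of what that enables (names are illustrative; this assumes the thread processor exposes the same constructors a normal process does):

```lua
local tp = thread:newProcessor("background")

-- Because everything on this processor is wrapped in a thread, the loop body
-- can sleep/hold directly instead of needing its own thread.
tp:newLoop(function()
    thread.sleep(1)
    print("tick from", multi.getCurrentProcess().Name)
end)

-- Depending on defaults you may need tp.Start(), as with regular processors.
multi:mainloop()
```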
-
multi.Processors:getHandler() -- returns the thread handler for a process
-
multi.OnPriorityChanged(self, priority) -- Connection that is triggered whenever the priority of an object is changed!
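For instance, a listener could be attached like this (sketch):

```lua
multi.OnPriorityChanged(function(obj, priority)
    print("priority changed:", obj.Type, priority)
end)
```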
-
multi.setClock(clock_func) -- If you have access to a clock function that works like os.clock() you can set it using this function. The priorityManager, if chronos is installed, sets the clock to chronos's version.
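Purely illustrative sketch: any os.clock()-like function works, for example LuaSocket's gettime() if that module happens to be available.

```lua
local ok, socket = pcall(require, "socket")
if ok and socket.gettime then
    multi.setClock(socket.gettime) -- swap in a different monotonic-ish clock source
end
```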
-
multi:setCurrentTask() -- Used to set the current task. Used in custom processors.
-
multi:setCurrentProcess() -- Used to set the current processor. It should only be called on a processor object
-
multi.success(...) -- Sends a success message, with a green SUCCESS tag; mainly used for tests.
-
multi.warn(...) -- Sends a warning, with a yellow WARNING tag.
-
multi.error(err) -- When called, this function will gracefully kill multi, cleaning things up. Uses a red ERROR tag.
Note: If you want multi.print, multi.warn and multi.error to work you need to enable them in settings.

```lua
multi, thread = require("multi"):init {
    print = true,
    warn = true,
    error = true -- Errors will throw regardless. Setting to true will
                 -- cause the library to force hard crash itself!
}
```
-
THREAD.exposeEnv(name) -- Merges set env into the global namespace of the system thread it was called in.
-
THREAD.setENV(table [, name]) -- Set a simple table that will be merged into the global namespace. If a name is supplied the global namespace will not be merged. Call THREAD.exposeEnv(name) to expose that namespace within a thread.
Note: To maintain compatibility between each integration use simple tables. No self references, and string indices only.
```lua
THREAD.setENV({
    shared_function = function()
        print("I am shared!")
    end
})
```
When this function is used it writes to a special variable that is read at thread spawn time. If this function is run again later, it can be used to set a different env that will be applied to future spawned threads.
-
THREAD.getENV() can be used to manage advanced uses of the setENV() functionality
-
Connection objects now support the % operator, in the form function % connection. It allows you to modify the incoming arguments of a connection event.
```lua
local conn1 = multi:newConnection()
local conn2 = function(a,b,c) return a*2, b*2, c*2 end % conn1

conn2(function(a,b,c) print("Conn2",a,b,c) end)
conn1(function(a,b,c) print("Conn1",a,b,c) end)

conn1:Fire(1,2,3)
conn2:Fire(1,2,3)
```

Output:

```
Conn2 2 4 6
Conn1 1 2 3
Conn2 1 2 3
```

Note: conn1 does not get modified; however, firing conn1 will also fire conn2 with its arguments modified. Firing conn2 directly does not modify conn2's arguments!
See its implementation below:

```lua
__mod = function(obj1, obj2)
    local cn = multi:newConnection()
    if type(obj1) == "function" and type(obj2) == "table" then
        obj2(function(...)
            cn:Fire(obj1(...))
        end)
    else
        error("Invalid mod!", type(obj1), type(obj2), "Expected function, connection(table)")
    end
    return cn
end
```
-
The len operator # will return the number of connections in the object!

```lua
local conn = multi:newConnection()
conn(function() print("Test 1") end)
conn(function() print("Test 2") end)
conn(function() print("Test 3") end)
conn(function() print("Test 4") end)
print(#conn)
```

Output:

```
4
```
-
Connection objects can be negated; -conn returns self, so conn = -conn reverses the order of connection events.

```lua
local conn = multi:newConnection()
conn(function() print("Test 1") end)
conn(function() print("Test 2") end)
conn(function() print("Test 3") end)
conn(function() print("Test 4") end)

print("Fire 1")
conn:Fire()
conn = -conn
print("Fire 2")
conn:Fire()
```

Output:

```
Fire 1
Test 1
Test 2
Test 3
Test 4
Fire 2
Test 4
Test 3
Test 2
Test 1
```
-
Connection objects can be divided: function / connection
This is a mix of the behavior of mod and concat, where the original connection can forward its events to the new one as well as perform a check like concat can. View its implementation below:

```lua
__div = function(obj1, obj2) -- /
    local cn = self:newConnection()
    local ref
    if type(obj1) == "function" and type(obj2) == "table" then
        obj2(function(...)
            local args = {obj1(...)}
            if args[1] then
                cn:Fire(multi.unpack(args))
            end
        end)
    else
        multi.error("Invalid divide! ", type(obj1), type(obj2), " Expected function/connection(table)")
    end
    return cn
end
```
-
Connection objects can now be concatenated with functions, not each other. For example:
multi, thread = require("multi"):init{print=true,findopt=true} local conn1, conn2 = multi:newConnection(), multi:newConnection() conn3 = conn1 + conn2 conn1(function() print("Hi 1") end) conn2(function() print("Hi 2") end) conn3(function() print("Hi 3") end) function test(a,b,c) print("I run before all and control if execution should continue!") return a>b end conn4 = test .. conn1 conn5 = conn2 .. function() print("I run after it all!") end conn4:Fire(3,2,3) -- This second one won't trigger the Hi's conn4:Fire(1,2,3) conn5(function() print("Test 1") end) conn5(function() print("Test 2") end) conn5(function() print("Test 3") end) conn5:Fire()Output:
I run before all and control if things go! Hi 3 Hi 1 Test 1 Test 2 Test 3 I run after it all!Note: Concat of connections does modify internal events on both connections depending on the direction func .. conn or conn .. func See implemention below:
__concat = function(obj1, obj2) local cn = multi:newConnection() local ref if type(obj1) == "function" and type(obj2) == "table" then cn(function(...) if obj1(...) then obj2:Fire(...) end end) cn.__connectionAdded = function(conn, func) cn:Unconnect(conn) obj2:Connect(func) end elseif type(obj1) == "table" and type(obj2) == "function" then ref = cn(function(...) obj1:Fire(...) obj2(...) end) cn.__connectionAdded = function() cn.rawadd = true cn:Unconnect(ref) ref = cn(function(...) if obj2(...) then obj1:Fire(...) end end) end else error("Invalid concat!", type(obj1), type(obj2),"Expected function/connection(table), connection(table)/function") end return cn end
Changed
-
multi:newTask(task) is now tied to the processor it is created on.
-
multi:getTasks() renamed to multi:getRunners(); this should help avoid confusion with multi:newTask().
-
changed how multi adds unpack to the global namespace. Instead we capture that value into multi.unpack.
-
multi:newUpdater(skip, func) -- Now accepts func as the second argument. So you don't need to call OnUpdate(func) after creation.
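So the two forms below are equivalent (sketch; the callback body is illustrative):

```lua
-- New form: pass the callback at creation time
multi:newUpdater(10, function() print("every 10th step") end)

-- Equivalent old form: attach it afterwards
local u = multi:newUpdater(10)
u:OnUpdate(function() print("every 10th step") end)
```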
-
multi errors now internally call
multi.error instead of multi.print
-
Actors Act() method now returns true when the main event is fired. Steps/Loops always return true. Nil is returned otherwise.
-
Connection:Connect(func, name) -- You can now supply a name to name the connection.
-
Connection:getConnection(name) -- Returns the connection function, which you can do with as you please.
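A small sketch combining the two (names and callbacks are illustrative):

```lua
local conn = multi:newConnection()

conn:Connect(function(msg) print("log:", msg) end, "logger")

local logger = conn:getConnection("logger") -- the function registered under that name
conn:Fire("hello")
```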
-
Fast connections are now the only connections. Legacy connections have been removed completely. Not much should change on the user's end, perhaps some minor changes.
-
conn:Lock(conn_ref) -- When supplied with a connection reference (what is returned by Connect(func)), it will only lock that connection reference and not the entire connection. Calling without any arguments will lock the entire connection.
-
conn:Unlock(conn_ref) -- When supplied with a connection reference, it restores that reference so it can be fired again. When no arguments are supplied it unlocks the entire connection.
Note: Lock and Unlock operate on different objects depending on whether arguments are supplied. If you unlock an entire connection, individual connection refs will not unlock; the same applies to locking. The entire connection and individual references are treated differently.
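A sketch of the distinction (assumes Connect returns a connection reference, as described above):

```lua
local conn = multi:newConnection()
local ref = conn:Connect(function() print("A") end)
conn:Connect(function() print("B") end)

conn:Lock(ref)   -- only the "A" reference is locked
conn:Fire()      -- prints "B"
conn:Unlock(ref) -- the "A" reference may fire again

conn:Lock()      -- locks the entire connection
conn:Fire()      -- nothing fires
conn:Unlock()    -- unlocks the entire connection again
```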
-
multi.OnObjectCreated is now only called when an object is created on the main process. Proc.OnObjectCreated is needed to detect when an object is created within a particular process.
-
multi.print shows a blue "INFO" tag before its message.
-
Connections internals changed, not too much changed on the surface.
-
newConnection(protect, func, kill)
protect disables fastmode, but protects the connection. func uses .. to append func to the connection so it is called after all connections run; there is some internal overhead when using this, but it isn't much. kill removes the connection when fired.
Note: When using protect/kill connections are triggered in reverse order
Removed
- multi.CONNECTOR_LINK -- No longer used
- multi:newConnector() -- No longer used
- THREAD.getName() use THREAD_NAME instead
- THREAD.getID() use THREAD_ID instead
- conn:SetHelper(func) -- With the removal of old Connect this function is no longer needed
- Connection events can no longer be chained with Connect. Connect only takes the function that you want to connect.
Fixed
- Issue with LuaJIT with 5.2 compat breaking coroutine.running(); the script now handles this properly so thread.isThread() returns as expected!
- Issue with coroutine based threads where they weren't all being scheduled due to a bad for loop. Replaced with a while loop to ensure all threads are consumed properly. If a thread created a thread that created a thread, which may or may not be on the same process, things got messed up because the original function wasn't built with these abstractions in mind.
- Issue with thread:newFunction() where a threaded function would keep a record of its returns and pass them to future calls of the function.
- Issue with multi:newTask(func) not properly handling tasks to be removed. Now uses a thread internally to manage things.
- multi.isMainThread was not properly handled in each integration. This has been resolved.
- Issue with pseudo threading env's being messed up. Required removal of getName and getID!
- Connections being multiplied together would block the entire connection object from pushing events! This was not the desired effect. Now only the connection reference involved in the multiplication is locked!
- multi:reallocate(processor, index) has been fixed to work with the current changes of the library.
- Issue with lanes not handling errors properly. This is now resolved
- Oversight with how pushStatus worked with nesting threaded functions, connections and forwarding events. Changes made and this works now!
```lua
func = thread:newFunction(function()
    for i=1,10 do
        thread.sleep(1)
        thread.pushStatus(i)
    end
end)

func2 = thread:newFunction(function()
    local ref = func()
    ref.OnStatus(function(num)
        -- do stuff with this data
        thread.pushStatus(num*2) -- Technically this is not run within a thread. This runs outside of a thread inside the thread handler.
    end)
end)

local handler = func2()
handler.OnStatus(function(num)
    print(num)
end)
```
ToDo
- Network Manager. I know I said it would be in this release, but I'm still planning it out.
Downloads
-
v15.3.1 Stable
released this
2023-01-04 10:33:36 -05:00 | 0 commits to 15.3.1 since this release
Update 15.3.1 - Bug fix
Fixed
- Issue where multiplying connections triggered events improperly
```lua
local multi, thread = require("multi"):init()

conn1 = multi:newConnection()
conn2 = multi:newConnection(); -- To remove function ambiguity

(conn1 * conn2)(function() print("Triggered!") end)

conn1:Fire()
conn2:Fire() -- Looks like this is triggering a response. It shouldn't. We need to account for this

conn1:Fire()
conn1:Fire() -- Triggering conn1 twice counted as a valid way to trigger the phantom connection (conn1 * conn2)

-- Now in 15.3.1, this works properly and the above doesn't do anything.
-- Internally connections are locked until the conditions are met.
conn2:Fire()
```
Downloads
-
v15.3.0 Stable
released this
2022-12-31 02:21:01 -05:00 | 0 commits to v15.3.0 since this release
Update 15.3.0 - A world of Connections
Full Update Showcase
multi, thread = require("multi"):init{print=true} GLOBAL, THREAD = require("multi.integration.lanesManager"):init() local conn = multi:newSystemThreadedConnection("conn"):init() multi:newSystemThread("Thread_Test_1",function() local multi, thread = require("multi"):init() local conn = GLOBAL["conn"]:init() conn(function() print(THREAD:getName().." was triggered!") end) multi:mainloop() end) multi:newSystemThread("Thread_Test_2",function() local multi, thread = require("multi"):init() local conn = GLOBAL["conn"]:init() conn(function(a,b,c) print(THREAD:getName().." was triggered!",a,b,c) end) multi:newAlarm(2):OnRing(function() print("Fire 2!!!") conn:Fire(4,5,6) THREAD.kill() end) multi:mainloop() end) conn(function(a,b,c) print("Mainloop conn got triggered!",a,b,c) end) alarm = multi:newAlarm(1) alarm:OnRing(function() print("Fire 1!!!") conn:Fire(1,2,3) end) alarm = multi:newAlarm(3):OnRing(function() multi:newSystemThread("Thread_Test_3",function() local multi, thread = require("multi"):init() local conn = GLOBAL["conn"]:init() conn(function(a,b,c) print(THREAD:getName().." was triggered!",a,b,c) end) multi:newAlarm(2):OnRing(function() print("Fire 3!!!") conn:Fire(7,8,9) end) multi:mainloop() end) end) multi:newSystemThread("Thread_Test_4",function() local multi, thread = require("multi"):init() local conn = GLOBAL["conn"]:init() local conn2 = multi:newConnection() multi:newAlarm(2):OnRing(function() conn2:Fire() end) multi:newThread(function() print("Conn Test!") thread.hold(conn + conn2) print("It held!") end) multi:mainloop() end) multi:mainloop()Added
-
multi:newConnection():Unconnect(conn_link) -- Fastmode previously didn't have the ability to be unconnected from. This method works with both fastmode and non-fastmode. fastMode will be made the default in v16.0.0. (This is a breaking change for those using the Destroy method; use this time to migrate to using Unconnect().)
-
thread.chain(...) allows you to chain thread.hold(FUNCTION)s together.

```lua
while true do
    thread.chain(hold_function_1, hold_function_2)
end
```

If the first function returns true, it moves on to the next one. Expanded, it is equivalent to:

```lua
while true do
    thread.hold(hold_function_1)
    thread.hold(hold_function_2)
end
```
-
Experimental option to multi settings:
findopt. When set to true it will print out a message when certain patterns are used with this library. For example, if an anonymous function is used in thread.hold() within a loop, the library will print a message alerting you that this isn't the most performant way to use thread.hold().
-
multi:newSystemThreadedConnection() -- Allows one to trigger connection events across threads. Works like any connection would. Supports all of the features and can even be added with non-SystemThreadedConnections, as demonstrated in the full showcase.
-
multi:newConnection():SetHelper(func) -- Sets the helper function that the connection object uses when creating connection links.
-
multi.ForEach(table, callback_function) -- Loops through the table and calls callback_function with each element of the array.
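For example (sketch; assumes the callback receives each element value):

```lua
multi.ForEach({"a", "b", "c"}, function(value)
    print(value)
end)
```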
-
If a name is not supplied when creating threads and threaded objects, a name is randomly generated. Unless you send it through an established channel/queue, you might not be able to easily init the object.
Changed
-
Internally all
OnError events are now connected to multi.print; you must pass print=true to the init settings when initializing the multi object: require("multi"):init{print=true}
-
All actors now use fastmode on connections
-
Performance enhancement for processes that are pumped. By suppressing the creation of the internal loop object that would otherwise manage the process, we free up memory and gain a bit more speed.
-
Connection:fastMode() and Connection:SetHelper() now return a reference to the connection itself.
-
Connection:[connect, hasConnections, getConnection] changed to Connection:[Connect, HasConnections, getConnections]. This was done in an attempt to follow a consistent naming scheme. The old methods will still work to prevent old code from breaking.
-
Connections, when added (+) together, now act like 'or'; to get the 'and' behavior, multiply (*) them together.
Note: This is a potentially breaking change for code using connections.

```lua
multi, thread = require("multi"):init{print=true}
-- GLOBAL, THREAD = require("multi.integration.lanesManager"):init()

local conn1, conn2, conn3 = multi:newConnection(), multi:newConnection(), multi:newConnection()

thread:newThread(function()
    print("Awaiting status")
    thread.hold(conn1 + (conn2 * conn3))
    print("Conn or Conn2 and Conn3")
end)

multi:newAlarm(1):OnRing(function()
    print("Conn")
    conn1:Fire()
end)

multi:newAlarm(2):OnRing(function()
    print("Conn2")
    conn2:Fire()
end)

multi:newAlarm(3):OnRing(function()
    print("Conn3")
    conn3:Fire()
end)
```
Removed
- Connection objects methods removed:
  - holdUT(), HoldUT() -- With the way thread.hold(conn) interacts with connections this method was no longer needed. To emulate it use multi.hold(conn). multi.hold() is able to emulate what thread.hold() does outside of a thread, albeit with some drawbacks.
Fixed
- SystemThreaded objects' variables weren't consistent.
- Issue with multiplied connections only being able to have a combined fire once.
ToDo
- Work on network parallelism (I am really excited to start working on this; not because it will have much use, but because it seems like a cool addition/project to work on. I just need time to actually work on it)
Downloads
-
multi v15.2.x Stable
released this
2022-04-19 18:45:14 -04:00 | 0 commits to v15.2.0 since this release
Update 15.2.1 - Bug Fix
- Fixed issue
Update 15.2.0 - Upgrade Complete
Full Update Showcase
package.path = "./?/init.lua;"..package.path multi, thread = require("multi"):init{print=true} GLOBAL, THREAD = require("multi.integration.threading"):init() -- Using a system thread, but both system and local threads support this! -- Don't worry if you don't have lanes or love2d. PesudoThreading will kick in to emulate the threading features if you do not have access to system threading. func = THREAD:newFunction(function(count) print("Starting Status test: ",count) local a = 0 while true do a = a + 1 THREAD.sleep(.1) -- Push the status from the currently running threaded function to the main thread THREAD.pushStatus(a,count) if a == count then break end end return "Done" end) thread:newThread("test",function() local ret = func(10) ret.OnStatus(function(part,whole) print("Ret1: ",math.ceil((part/whole)*1000)/10 .."%") end) print("TEST",func(5).wait()) -- The results from the OnReturn connection is passed by thread.hold print("Status:",thread.hold(ret.OnReturn)) print("Function Done!") end).OnError(function(...) print("Error:",...) end) local ret = func(10) local ret2 = func(15) local ret3 = func(20) local s1,s2,s3 = 0,0,0 ret.OnError(function(...) print("Error:",...) end) ret2.OnError(function(...) print("Error:",...) end) ret3.OnError(function(...) print("Error:",...) end) ret.OnStatus(function(part,whole) s1 = math.ceil((part/whole)*1000)/10 print(s1) end) ret2.OnStatus(function(part,whole) s2 = math.ceil((part/whole)*1000)/10 print(s2) end) ret3.OnStatus(function(part,whole) s3 = math.ceil((part/whole)*1000)/10 print(s3) end) loop = multi:newTLoop() function loop:testing() print("testing haha") end loop:Set(1) t = loop:OnLoop(function() print("Looping...") end):testing() local proc = multi:newProcessor("Test") local proc2 = multi:newProcessor("Test2") local proc3 = proc2:newProcessor("Test3") proc.Start() proc2.Start() proc3.Start() proc:newThread("TestThread_1",function() while true do thread.sleep(1) end end) proc:newThread("TestThread_2",function() while true do thread.sleep(1) end end) proc2:newThread("TestThread_3",function() while true do thread.sleep(1) end end) thread:newThread(function() thread.sleep(1) local tasks = multi:getStats() for i,v in pairs(tasks) do print("Process: " ..i.. "\n\tTasks:") for ii,vv in pairs(v.tasks) do print("\t\t"..vv:getName()) end print("\tThreads:") for ii,vv in pairs(v.threads) do print("\t\t"..vv:getName()) end end thread.sleep(10) -- Wait 10 seconds then kill the process! os.exit() end) multi:mainloop()Added:
-
multi:getStats() -- Returns a structured table where you can access data on processors, their tasks, and their threads:

```lua
-- Upon calling multi:getStats() the table below is returned
get_Stats_Table {
    proc_1 -- table
    proc_2 -- table
    ...
    proc_n -- table
}

proc_Table {
    tasks = {alarms, steps, loops, etc},          -- All multi objects
    threads = {thread_1, thread_2, thread_3, etc} -- Thread objects
}
-- Refer to the objects' documentation to see how you can interact with them
```

- Reference the Full Update Showcase for the method in action
-
multi:newProcessor(name, nothread) -- If nothread is true it auto-sets the processor as Active, so proc.run() will work without the need for proc.Start()
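A rough sketch of the manually pumped form (names and values are illustrative):

```lua
local proc = multi:newProcessor("manual", true) -- nothread = true

proc:newTLoop(function() print("pumped tick") end, 1)

-- With nothread you pump the process yourself instead of using Start()/Stop()
while true do
    proc.run()
end
```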
-
multi:getProcessors() -- Returns a list of all processors
-
multi:getName() -- Returns the name of a processor
-
multi:getFullName() -- Returns the full name / entire process tree of a process
-
Processors can be attached to processors
-
multi:getTasks() -- Returns a list of all non-thread-based objects (loops, alarms, steps, etc.)
-
multi:getThreads() -- Returns a list of all threads on a process
-
multi:newProcessor(name, nothread).run() -- New run function on the processor object; it needs to be called to pump the process's events when nothread is used
-
multi:newProcessor(name, nothread):newFunction(func, holdme) -- Acts like thread:newFunction(), but binds the execution of that threaded function to the processor
-
multi:newTLoop() member function TLoop:Set(set) -- Sets the time to wait for the TLoop
-
multi:newStep() member function Step:Count(count) -- Sets the amount a step should count by
-
multi:newTStep() member function TStep:Set(set) -- Sets the time to wait for the TStep
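Roughly, these setters can be used like so (sketch; values and constructor arguments are illustrative):

```lua
local tloop = multi:newTLoop(function() print("tloop tick") end, 1)
tloop:Set(2)   -- now waits 2 seconds between loops

local step = multi:newStep()
step:Count(5)  -- the step counts by 5

local tstep = multi:newTStep()
tstep:Set(.5)  -- the TStep waits half a second

multi:mainloop()
```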
Changed:
-
thread.hold(connectionObj) now passes the returns of that connection to thread.hold()! See example below:

```lua
multi, thread = require("multi"):init()

func = thread:newFunction(function(count)
    local a = 0
    while true do
        a = a + 1
        thread.sleep(.1)
        thread.pushStatus(a,count)
        if a == count then break end
    end
    return "Done", 1, 2, 3
end)

thread:newThread("test",function()
    local ret = func(10)
    ret.OnStatus(function(part,whole)
        print("Ret1: ",math.ceil((part/whole)*1000)/10 .."%")
    end)
    print("Status:",thread.hold(ret.OnReturn))
    print("Function Done!")
    os.exit()
end).OnError(function(...)
    print("Error:",...)
end)

multi:mainloop()
```

Output:

```
Ret1:  10%
Ret1:  20%
Ret1:  30%
Ret1:  40%
Ret1:  50%
Ret1:  60%
Ret1:  70%
Ret1:  80%
Ret1:  90%
Ret1:  100%
Status: Done 1 2 3 nil nil nil nil nil nil nil nil nil nil nil nil
Function Done!
```
-
Modified how threads are handled internally. This change makes it so threads, regardless of amount, should not impact performance; what you do in the threads might. This change was made by internally only processing one thread per step per processor. If you have 10 processors that are all active, expect one step to process 10 threads. However, if one processor has 10 threads, each step will only process one thread. Simply put, each additional thread shouldn't impact performance as it did before.
-
Moved multi:newThread(...) into the thread interface (thread:newThread(...)); code using multi:newThread(...) will still work. Also, using process:newThread(...) binds the thread to the process, meaning if the process the thread is bound to is paused, so is the thread.
-
multi:mainloop(settings) / multi:uManager(settings) no longer take a settings argument; that has been moved to multi:init(settings).

| Setting  | Description |
|----------|-------------|
| print    | When set to true, parts of the library will print out updates; otherwise no internal printing is done |
| priority | When set to true, the library will prioritize different objects based on their priority |
-
multi:newProcessor(name, nothread) -- The new argument allows you to tell the system you won't be using the Start() and Stop() functions; rather, you will handle the process yourself using the proc.run() function. This function needs to be called to pump the events.
- Processors now also use lManager instead of uManager.
-
multi.hold(n, opt) now supports an option table like thread.hold does.
-
Connection Objects now pass on the parent object if created on a multiobj. This was to allow chaining to work properly with the new update
multi,thread = require("multi"):init() loop = multi:newTLoop() function loop:testing() print("testing haha") end loop:Set(1) t = loop:OnLoop(function() print("Looping...") end):testing() multi:mainloop() --[[Returns as expected: testing haha Looping... Looping... Looping... ... Looping... Looping... Looping... ]]While chaining on the OnSomeEventMethod() wasn't really a used feature, I still wanted to keep it just incase someone was relying on this working. And it does have it uses
-
All Multi Objects now use Connection objects
multiobj:OnSomeEvent(func) or multiobj.OnSomeEvent(func)
-
Connection Objects no longer Fire with syntax sugar when attached to an object:
multiobj:OnSomeEvent(...) no longer triggers the Fire event. As part of the update to make all objects use connections internally, this little-used feature had to be scrapped!
-
multi:newTStep now derives its functionality from multi:newStep (cuts down on code length a bit)
Removed:
-
multi:getTasksDetails() -- Remade completely and now called multi:getStats()
-
multi:getError() -- Removed when the protect setting was removed
-
multi:FreeMainEvent() -- The new changes with connections make this function unnecessary
-
multi:OnMainConnect(func) -- See above
-
multi:connectFinal(func) -- See above
-
multi:lightloop() -- Cleaned up the mainloop/uManager method, which is now actually faster than lightloop (which should have been called liteloop)
-
multi:threadloop() -- See above for reasons
-
multi setting: protect -- This added extra complexity to the mainloop and not much benefit. If you feel a function will error, use pcall yourself. This saves a decent amount of cycles, about a 6.25% increase in performance.
-
multi:GetParentProcess() -- Use multi.getCurrentProcess() instead
-
priority scheme 2, 3 and auto-priority have been removed! Only priority scheme 1 actually performed in a reasonable fashion so that one remained.
-
multi:newFunction(func) -- thread:newFunction(func) has many more features and replaces what multi:newFunction did
-
multi.holdFor() -- Now that multi.hold takes the option table that thread.hold has, this feature can be emulated with it.
-
Calling Fire on a connection no longer returns anything! Now that internal features use connections, I noticed how slow connections were and have increased their speed quite a bit, from 50,000 steps per second to almost 7 million. All other features should work just fine; only returning values has been removed.
Fixed:
-
Issue with Lanes crashing the lua state. Issue seemed to be related to my filesystem, since remounting the drive caused the issue to stop. (Windows)
-
Issue where system threaded functions were not up to date with threaded functions
-
Issue where getTasksDetails() would try to process a destroyed object, causing it to crash
-
Issue with multi.hold() not pumping the mainloop and only the scheduler
ToDo:
- Work on network parallelism
Downloads
-
multi v15.1.x Stable
released this
2021-11-30 21:28:18 -05:00 | 152 commits to master since this release
Update 15.1.0 - Hold the thread!
Full Update Showcase
package.path = "./?/init.lua;"..package.path multi,thread = require("multi"):init() func = thread:newFunction(function(count) local a = 0 while true do a = a + 1 thread.sleep(.1) thread.pushStatus(a,count) if a == count then break end end return "Done" end) multi:newThread("Function Status Test",function() local ret = func(10) local ret2 = func(15) local ret3 = func(20) ret.OnStatus(function(part,whole) print("Ret1: ",math.ceil((part/whole)*1000)/10 .."%") end) ret2.OnStatus(function(part,whole) print("Ret2: ",math.ceil((part/whole)*1000)/10 .."%") end) ret3.OnStatus(function(part,whole) print("Ret3: ",math.ceil((part/whole)*1000)/10 .."%") end) -- Connections can now be added together, if you had multiple holds and one finished before others and wasn't consumed it would lock forever! This is now fixed thread.hold(ret2.OnReturn + ret.OnReturn + ret3.OnReturn) print("Function Done!") os.exit() end) test = thread:newFunction(function() return 1,2,nil,3,4,5,6,7,8,9 end,true) print(test()) multi:newThread("testing",function() print("#Test = ",test()) print(thread.hold(function() print("Hello!") return false end,{ interval = 2, cycles = 3 })) -- End result, 3 attempts within 6 seconds. If still false then timeout print("held") end).OnError(function(...) print(...) end) sandbox = multi:newProcessor() sandbox:newTLoop(function() print("testing...") end,1) test2 = multi:newTLoop(function() print("testing2...") end,1) sandbox:newThread("Test Thread",function() local a = 0 while true do thread.sleep(1) a = a + 1 print("Thread Test: ".. multi.getCurrentProcess().Name) if a == 10 then sandbox.Stop() end end end).OnError(function(...) print(...) end) multi:newThread("Test Thread",function() while true do thread.sleep(1) print("Thread Test: ".. multi.getCurrentProcess().Name) end end).OnError(function(...) print(...) end) sandbox.Start() multi:mainloop()Added:
multi:newSystemThreadedJobQueue(n) isEmpty()
- returns true if the queue is empty, false if there are items in the queue.
Note: a queue might be empty, but the job may still be running and not finished yet! Also if a registered function is called directly instead of pushed, it will not reflect inside the queue until the next cycle!
Example:
package.path="?.lua;?/init.lua;?.lua;?/?/init.lua;"..package.path package.cpath = [[C:\Program Files (x86)\Lua\5.1\systree\lib\lua\5.1\?.dll;C:\Program Files (x86)\Lua\5.1\systree\lib\lua\5.1\?\core.dll;]] ..package.cpath multi,thread = require("multi"):init() GLOBAL,THREAD = require("multi.integration.threading"):init() -- Auto detects your enviroment and uses what's available jq = multi:newSystemThreadedJobQueue(5) -- Job queue with 4 worker threads func = jq:newFunction("test",function(a,b) THREAD.sleep(2) return a+b end) for i = 1,10 do func(i,i*3).connect(function(data) print(data) end) end local a = true b = false multi:newThread("Standard Thread 1",function() while true do thread.sleep(.1) print("Empty:",jq:isEmpty()) end end).OnError(function(self,msg) print(msg) end) multi:mainloop()multi.TIMEOUT
multi.TIMEOUT is equal to "TIMEOUT"; it is recommended to use this in case things change later on. There are plans to change the timeout value to become a custom object instead of a string.
New connections on threaded functions
-
func.OnStatus(...) -- Allows you to connect to the status of a function; see thread.pushStatus()
-
func.OnReturn(...) -- Allows you to connect to the function's return event and capture its returns; see the example for it in use.
multi:newProcessor(name)
package.path = "./?/init.lua;"..package.path multi,thread = require("multi"):init() -- Create a processor object, it works a lot like the multi object sandbox = multi:newProcessor() -- On our processor object create a TLoop that prints "testing..." every second sandbox:newTLoop(function() print("testing...") end,1) -- Create a thread on the processor object sandbox:newThread("Test Thread",function() -- Create a counter named 'a' local a = 0 -- Start of the while loop that ends when a = 10 while true do -- pause execution of the thread for 1 second thread.sleep(1) -- increment a by 1 a = a + 1 -- display the name of the current process print("Thread Test: ".. multi.getCurrentProcess().Name) if a == 10 then -- Stopping the processor stops all objects created inside that process including threads. In the backend threads use a regular multiobject to handle the scheduler and all of the holding functions. These all stop when a processor is stopped. This can be really useful to sandbox processes that might need to turned on and off with ease and not having to think about it. sandbox.Stop() end end -- Catch any errors that may come up end).OnError(function(...) print(...) end) sandbox.Start() -- Start the process multi:mainloop() -- The main loop that allows all processes to continueNote: Processor objects have been added and removed many times in the past, but will remain with this update.
| Attribute | Type | Returns | Description |
|-----------|------|---------|-------------|
| Start | Method() | self | Starts the process |
| Stop | Method() | self | Stops the process |
| OnError | Connection | connection | Allows connection to the process error handler |
| Type | Member: string | "process" | Contains the type of object |
| Active | Member: boolean | variable | If false the process is not active |
| Name | Member: string | variable | The name set at process creation |
| process | Thread | thread | A handle to a multi thread object |

Note: All tasks/threads created on a process are linked to that process. If a process is stopped, all tasks/threads will be halted until the process is started back up.
Connection can now be added together
Very useful when using thread.hold for multiple connections to trigger.
If you had multiple holds and one finished before the others and wasn't consumed, it would lock forever! This is now fixed.
print(conn + conn2 + conn3 + connN) -- Can be chained as long as you want! See example below
Status added to threaded functions
-
thread.pushStatus(...) -- Allows a developer to push a status from a function.
-
tFunc.OnStatus(func(...)) -- A connection that can be used on a function to view the status of the threaded function
Example:
package.path = "./?/init.lua;"..package.path multi,thread = require("multi"):init() func = thread:newFunction(function(count) local a = 0 while true do a = a + 1 thread.sleep(.1) thread.pushStatus(a,count) if a == count then break end end return "Done" end) multi:newThread("Function Status Test",function() local ret = func(10) local ret2 = func(15) local ret3 = func(20) ret.OnStatus(function(part,whole) --[[ Print out the current status. In this case every second it will update with: 10% 20% 30% ... 100% Function Done! ]] print(math.ceil((part/whole)*1000)/10 .."%") end) ret2.OnStatus(function(part,whole) print("Ret2: ",math.ceil((part/whole)*1000)/10 .."%") end) ret3.OnStatus(function(part,whole) print("Ret3: ",math.ceil((part/whole)*1000)/10 .."%") end) -- Connections can now be added together, if you had multiple holds and one finished before others and wasn't consumed it would lock forever! This is now fixed thread.hold(ret2.OnReturn + ret.OnReturn + ret3.OnReturn) print("Function Done!") os.exit() end)Changed:
-
f = thread:newFunction(func, holdme) -- Nothing changed that will affect how the object functions by default. The returned function is now a callable table, and 3 new methods have been added:

| Method | Description |
|--------|-------------|
| Pause() | Pauses the function. Will cause the function to return nil, "Function is paused" |
| Resume() | Resumes the function |
| holdMe(set) | Sets the holdme argument that existed at function creation |

```lua
package.path = "./?/init.lua;"..package.path
multi, thread = require("multi"):init()

test = thread:newFunction(function(a,b)
    thread.sleep(1)
    return a,b
end, true)

print(test(1,2))
test:Pause()
print(test(1,2))
test:Resume()
print(test(1,2))

--[[
-- If you left holdme nil/false
print(test(1,2).connect(function(...) print(...) end))
test:Pause()
print(test(1,2).connect(function(...) print(...) end))
test:Resume()
print(test(1,2).connect(function(...) print(...) end))
]]

multi:mainloop()
```

Output:
```
1 2
nil Function is paused
1 2
```

If holdme is nil/false:
```
nil Function is paused
1 2 nil...
1 2 nil...
```
-
thread.hold(n,opt) Ref. Issue
-
Added option table to thread.hold
| Option | Description |
|--------|-------------|
| interval | Time between each poll |
| cycles | Number of cycles before timing out |
| sleep | Number of seconds before timing out |
| skip | Number of cycles before testing again; does not cause a timeout! |

Note: cycles and sleep options cannot both be used at the same time. Interval and skip cannot be used at the same time either. Cycles take priority over sleep if both are present! HoldFor and HoldWithin can be emulated using the new features. Old functions will remain for backward compatibility.
Using cycles, sleep or interval will cause a timeout; returning nil, multi.TIMEOUT
-
n can be a number and thread.hold will act like thread.sleep. When n is a number the option table will be ignored!
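Putting the two behaviors side by side (sketch; the work_is_done flag is illustrative):

```lua
multi:newThread("hold demo", function()
    -- A number behaves like thread.sleep (any option table is ignored)
    thread.hold(2)

    -- A function plus options: poll every .5s, give up after 10 cycles
    local ok, err = thread.hold(function() return work_is_done end, {interval = .5, cycles = 10})
    if err == multi.TIMEOUT then
        print("timed out")
    end
end)

multi:mainloop()
```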
-
Removed:
- N/A
Fixed:
- Threaded functions not returning multiple values Ref. Issue
- Priority Lists not containing Very_High and Very_Low from previous update
- All functions that should have chaining now do. Reminder: all functions that don't return any data return a reference to themselves to allow chaining of method calls.
ToDo
- Work on network parallelism (I really want to make this, but time and getting it right is proving much more difficult)
- Work on QOL changes to allow cleaner code like this
Downloads
-
multi v15.0.x Stable
released this
2021-04-30 10:48:58 -04:00 | 183 commits to master since this release
Update 15.0.0 - The art of faking it
Full Update Showcase
package.path="?.lua;?/init.lua;?.lua;?/?/init.lua;"..package.path multi,thread = require("multi"):init() GLOBAL,THREAD = require("multi.integration.threading"):init() -- Auto detects your enviroment and uses what's available jq = multi:newSystemThreadedJobQueue(4) -- Job queue with 4 worker threads func = jq:newFunction("test",function(a,b) THREAD.sleep(2) return a+b end) for i = 1,10 do func(i,i*3).connect(function(data) print(data) end) end multi:newThread("Standard Thread 1",function() while true do thread.sleep(1) print("Testing 1 ...") end end) multi:newISOThread("ISO Thread 2",{test=true},function() while true do thread.sleep(1) print("Testing 2 ...") end end) multi:mainloop()Note:
This was supposed to be released over a year ago, but work and other things got in my way. Pseudo threading now works. The goal is that you can write modules that can be scaled up to utilize threading features when available.
Added:
- multi:newISOThread(name,func,env)
  - Creates an isolated thread that prevents both locals and globals from being accessed.
  - Was designed for the pseudoManager so it can emulate threads. You can use it as a super sandbox, but remember upvalues are also stripped, which was intended for what I wanted them to do!
- Added new integration: pseudoManager, which functions just like lanesManager and loveManager, but is actually single threaded
  - This was implemented because you may want to build your code around being multi-threaded, but some systems/implementations of Lua may not permit this. Since we now have a "single threaded" implementation of multi threading, we can actually create scalable code where things are automatically threaded if built correctly. I am planning on adding more threaded objects.
- In addition to adding pseudo threading,
multi.integration.threading can now be used to auto-detect which environment you are on and use its threading features.
If you are using love2d it will use that; if you have lanes available then it will use lanes. Otherwise it will use pseudo threading. This allows module creators to implement scalable features without having to worry about which environment they are in. You can now require a consistent module: GLOBAL, THREAD = require("multi.integration.threading"):init()
Changed:
- Documentation to reflect the changes made
Removed:
- CBT (coroutine based threading) has lost a feature, one that hasn't been used much, but it broke compatibility with anything above Lua 5.1. My goal is to make my library work with all versions of Lua above 5.1, including 5.4. Lua 5.2+ changed how environments work, which means that you can no longer modify the environment of a function without using the debug library. This isn't ideal for how things in my library worked, but it is what it is. The feature lost is the one that converted all functions within a threaded environment into threaded functions. In hindsight this wasn't the best practice, and if that is the desired state you as the user can do it manually anyway. This shouldn't affect anyone's code in a massive way.
Fixed:
- pseudoThreading and threads had an issue where they weren't executing properly
- lanesManager THREAD:get(STRING: name) not returning the value
Todo:
- Add more details to the documentation
Downloads
-
V14.2.x Stable
released this
2020-03-14 09:12:00 -04:00 | 0 commits to v14.2.0 since this release
Added:
- Type: destroyed
  - A special state of an object that causes that object to become immutable and callable. The object's Type is always "destroyed"; it cannot be changed. The object can be indexed to infinity without issue. Every part of the object can be called as if it were a function, including the indexed parts. This is done in case you destroy an object and still use it somewhere. However, if you are expecting something from the object then you may still encounter an error, though the returned type is an instance of the destroyed object, which can be indexed and called like normal. This object can be used in any way and no errors will come about from it.
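In practice that means code like the following won't raise errors after destruction (sketch; assumes the usual Destroy method converts the object):

```lua
local alarm = multi:newAlarm(1)
alarm:Destroy()

print(alarm.Type)           -- "destroyed"
alarm.anything.you.like()   -- indexing and calling never error
print(alarm.OnRing)         -- still safe to touch, though nothing useful comes back
```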
Fixed:
- thread.holdFor(n,func) and thread.holdWithin(n,func) now accept a connection object as the func argument
- Issue with threaded functions not handling nil properly from returns. This has been resolved and works as expected.
- Issue with system threaded job queues' newFunction() not allowing nil returns! This has been addressed and is no longer an issue.
- Issue with hold-like functions not being able to return false
- Issue with connections not returning a handle for managing a specific conn object.
- Issue with connections where connection chaining wasn't working properly. This has been addressed.
package.path="?.lua;?/init.lua;?.lua;?/?/init.lua;"..package.path local multi,thread = require("multi"):init() test = multi:newConnection() test(function(hmm) print("hi",hmm.t) hmm.t = 2 end)(function(hmm) print("hi2",hmm.t) hmm.t = 3 end)(function(hmm) print("hi3",hmm.t) end) test:Fire({t=1})
Changed:
- Destroying an object converts the object into a 'destroyed' type.
- connections now have type 'connector_link'
```lua
OnExample = multi:newConnection() -- Type: connector. I'm debating if I should change this name to
                                  -- multi:newConnector() and have connections to it have type connection
conn = OnExample(...)
print(conn.Type) -- connector_link
```
Removed: (Cleaning up a lot of old features)
- Removed multi:newProcessor(STRING: file) — Old feature that is not really needed anymore. Create your multi-objs on the multi object or use a thread
- bin dependency from the rockspec
- Example folder and .html variants of the .md files
- multi:newTrigger() — Connections do everything this thing could do and more.
- multi:newHyperThreadedProcess(name)*
- multi:newThreadedProcess(name)*
- multi.nextStep(func)* — The new job System can be used instead to achieve this
- multi.queuefinal(self) — An Old method for a feature long gone from the library
- multi:setLoad(n)*
- multi:setThrestimed(n)*
- multi:setDomainName(name)*
- multi:linkDomain(name)*
- multi:_Pause()* — Use multi:Stop() instead!
- multi:isHeld()/multi:IsHeld()* — Holding is handled differently, so a held variable is no longer needed for checking.
- multi.executeFunction(name,...)*
- multi:getError()* — Errors are no longer retrieved like that; multi.OnError(func) is the way to go
- multi.startFPSMonitior()*
- multi.doFPS(s)*
*Many features have become outdated/redundant with new features and additions that have been added to the library
Downloads
-
V14.0.0 Stable
released this
2020-01-26 10:06:23 -05:00 | 3 commits to v14.0.0 since this release
While you can still use luarocks to handle installing the library, this provides an easy copy-and-paste way to get the files needed for love2d and other environments.
Refer to changes.md for what's new in this release
Downloads
- Source Code (ZIP)
- Source Code (TAR.GZ)
-
changes.html
180 KiB
-
changes.md
72 KiB
-
multi-v14.0.0.zip
40 KiB
-
1.8.5 Update Stable
released this
2017-06-28 22:55:49 -04:00 | 389 commits to master since this release
Will include highlights for all 1.8.x changes
Updated integrations
Added new features for threads
Fixed multi thread error management
Look at the changes in the ReadMe for more info
Downloads
- Source Code (ZIP)
- Source Code (TAR.GZ)
-
examples.1.8.0.zip
27 MiB
-
examples.1.8.2.zip
27 MiB
-
examples.1.8.4.zip
1.5 MiB
-
examples.1.8.5.zip
1.5 MiB
-
multi.1.8.0.zip
31 KiB
-
multi.1.8.2.zip
62 KiB
-
multi.1.8.4.zip
68 KiB
-
multi.1.8.5.zip
72 KiB