Amazing project. In the era of AI I can see the software like this being used daily.
Nice work. I’ve been frustrated with how closed off location history tools have become lately. This looks like a solid step toward giving people real ownership of their data again. Definitely checking this out.
This is great. I want this, but for much more. I want it to also be a Nextcloud and Zotero replacement, storing all my documents and books and documenting when I added, opened, or edited them. I want it to store all notes that I write. I want it to record and display all browser tabs I open, when I do so, everything I copy and paste, every key I press. I want a record of everything I do in the digital world that is searchable and that can answer the question "what was I working on 2 weeks ago on this day?" and bring back all the context, too.
For obvious reasons, this has to be self-hosted and managed. I'm not interested in creating surveillance software or technology.
It sounds extreme, but whenever I've seen people's Obsidian setups, with heaps of manual and bidirectional linking, I've always thought that time is the one thing we should look at. If I look up some concept on Wikipedia today, there's a higher chance of me also looking up related concepts, or working on something related, around that time.
I think Microsoft has some kind of product that can help you Recall what you were working on?
> I want it to also be a Nextcloud and Zotero replacement, storing all my documents and books and documenting when I added, opened, or edited them. I want it to store all notes that I write.
Sounds in-scope so far. Long-term, perhaps, and maybe optional add-on features rather than built-in, but we'll see.
> I want it to record and display all browser tabs I open, when I do so, everything I copy and paste, every key I press.
That is possible in theory, but for me personally that's just too detailed. :D I wouldn't need all that granularity, myself.
But hey, the vision is pretty similar. We are generating all sorts of data to document and understand our lives -- we don't even have to deliberately write a journal -- but we have no way of comprehending it. This app is an attempt to solve that.
Timelinize looks rad. Congratulations.
> That is possible in theory, but for me personally that's just too detailed. :D I wouldn't need all that granularity, myself.
Think this can go quite far with just the browsing history & content of viewed webpages.
Thanks! Yes, I agree. Someone already implemented a Firefox history data source; I don't think it includes the _content_ of the pages, but that could be interesting.
This looks really cool and like something I've been subconsciously looking for!
A couple thoughts & ideas:
- Given the sensitivity of the data, I would be rather scared to self-host this, unless it's a machine at home, behind a Wireguard/Tailscale setup. I would love to see this as an E2E-encrypted application, similarly to Ente.io.
- Could index and storage backend be decoupled, so that I can host my photos etc. elsewhere and, in particular, prevent data duplication? (For instance, if you already self-host Immich or Ente.io and you also set up backups, it'd be a waste to have Timelinize store a separate copy of the photos IMO.) I know, this is not entirely trivial to achieve, but for viewing & interacting with different types of data there are already tons of specialized applications out there. Timelinize can't possibly replace all of them.
- Support for importing Polarsteps trips, and for importing Signal backups (e.g. via https://github.com/bepaald/signalbackup-tools ) would be nice!
Great comment, thanks for the questions.
> unless it's a machine at home,
This is, in fact, the intended model.
The problem with any other model, AFAIK, is that someone else has access to your data, unless I implement an encrypted live database, like with homomorphic encryption. But even then, I'm sure the data would have to be decrypted in memory in places (transcoding videos or encoding images, for starters), and the physical owner of the machine will always have access to that.
I just don't think any other way of doing it is really feasible to truly preserve your privacy. I am likely wrong, but if so, I also imagine it's very tedious, nuanced, error-prone, and restrictive.
(Or maybe I'm just totally wrong!)
> - Could index and storage backend be decoupled, so that I can host my photos etc. elsewhere and, in particular, prevent data duplication?
I know this is contentious for some, but part of the point is to duplicate/copy your data into the timeline. It acts as a backup, and it ensures consistency, reliability, and availability.
Apps like PhotoStructure do what you describe -- and do a good job of indexing external content. I just think that would be hard to pull off well in Timelinize.
> Support for importing Polarsteps trips, and for importing Signal backups (e.g. via https://github.com/bepaald/signalbackup-tools ) would be nice!
Agreed! I played with Signal exports for a while, but the format changed enough that it was difficult to rely on as a data source. Especially since it's not obvious what changed: it's encrypted, so it's kind of a black box.
That said, anyone is welcome to contribute more data sources. I will even have an import API at some point, so the data sources don't have to be compiled in. Other scripts or programs could push data to Timelinize.
Just to reiterate, one of the main goals of Timelinize is to have your data. It may mean some duplication, but I'm OK with that. Storage is getting cheap enough, and even if it's expensive, it's worth it.
Thanks for your thoughtful response!
> I just don't think any other way of doing it is really feasible to truly preserve your privacy. I am likely wrong, but if so, I also imagine it's very tedious, nuanced, error-prone, and restrictive.
It's certainly not easy but I wouldn't go as far as saying it requires homomorphic encryption. Have you had a look at what the Ente.io people do? Even though everything is E2E-encrypted, they have (purely local) facial recognition, which to me sounds an order of magnitude harder (compute-intensive) than building a chronological index/timeline. But maybe I'm missing something here, which isn't unlikely, given that I'm not the person who just spent a decade building this very cool tool.
> It acts as a backup, and it ensures consistency, reliability, and availability.
Hmmm, according to you[0],
> Timelinize is an archival tool, not a backup utility. Please back up your timeline(s) with a proper backup tool.
;)
I get your point, though, especially when it comes to reliability & availability. Maybe the deduplication needs to happen at a different level, e.g. at the level of the file system (ZFS etc.) or at least at the level of backups (i.e. have restic/borgbackup deduplicate identical files in the backed-up data).
Then again, I can't say I have not had wet dreams once or twice of a future where apps & their persistent data simply refer to user files through their content hashes, instead of hard-coding paths & URLs. (Prime example: Why don't m3u playlist files use hashes to become resistant against file renamings? Every music player already indexes all music files, anyway. Sigh.)
> Especially since it's not just obvious what changes, it's encryption so it's kind of a black box.
Wouldn't you rather diff the data after decrypting the archive?
> Just to reiterate, one of the main goals of Timelinize is to have your data. It may mean some duplication, but I'm OK with that. Storage is getting cheap enough, and even if it's expensive, it's worth it.
I suspect it will lead to duplication of pretty much all user data (i.e. original storage requirements × 2), at least if you're serious about your timeline. However, I see your point; it might very well be a tradeoff that's worth it.
[0]: https://timelinize.com/docs/importing-data
Oh yeah, mholt is notable for having created Caddy (the web server). My interest in Timelinize just went up.
This is an amazing idea, but do I have to run Google Takeout every time I want to update the data[0]? Unfortunately, that's such a cumbersome process that I don't think I'd use this. But if my timeline could update in near real time, this would be a killer app.
[0]: https://timelinize.com/docs/data-sources/google-photos
Yeah. Major thorn in my side. I spent hours trying to automate that process using headless Chrome, and it kinda worked, until I realized I needed to physically authenticate not just once, but every 10 minutes. So it basically can't be automated, since 2FA is required so often.
In practice, I do a Takeout once or twice a year. (I recommend this even if not using Timelinize, so you can be sure to have your data.)
I thought you could set up an automatic Takeout export periodically and choose the target to be your Google Drive. Then, via a web app's OAuth, you could pull the data that way. Frequency is limited (it looks like the auto export runs "every 2 months for 1 year"), so hardly real-time, but it seems useful and (relatively) easy? Does a method like that not work for your intentions?
Will have to look into that. Sounds like it could be expensive but maybe worth it.
You can schedule the takeout to Drive, then use a tool such as rclone (amazing tool) to pull it down.
It should not add any costs except the storage for the takeout zip on drive.
Look at supported providers in rclone and you might find easy solutions for some hard sync problems: https://rclone.org/#providers
> except the storage for the takeout zip on drive.
Yeah, that's the cost I'm talking about. It essentially amounts to paying an extra subscription to be able to download your data [on a regular basis].
I'm a big rclone fan btw :) I'm sure there's some future where we do something like this to automate Takeouts.
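The schedule-to-Drive-then-rclone flow above is simple to wrap in a small Go program run from cron. The remote name `gdrive` and the `Takeout` folder are placeholders for whatever you configured in rclone; `rclone copy <src> <dest>` is rclone's standard copy command.

```go
package main

import (
	"fmt"
	"os/exec"
)

// takeoutPull builds the rclone invocation that copies a scheduled
// Takeout archive from a configured Google Drive remote to local disk.
// "gdrive" and "Takeout" are placeholders for whatever you named the
// remote and the export folder when setting up rclone.
func takeoutPull(remote, srcDir, destDir string) *exec.Cmd {
	return exec.Command("rclone", "copy", remote+":"+srcDir, destDir)
}

func main() {
	cmd := takeoutPull("gdrive", "Takeout", "/data/takeout-inbox")
	fmt.Println("would run:", cmd.Args)
	// A real cron job would call cmd.Run() and check the error,
	// then kick off an import of whatever landed in the inbox.
}
```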
Some kind of companion app that runs on my phone and streams the latest data (photos, location history, texts, etc ) back to the timeline would probably be more tractable for live updates. But that is probably a wildly different scope than the import based workflow. This is very cool regardless.
For sure.
About 5-6 years ago, Timelinize actually used only the Google Photos API; it didn't even support imports from Takeout yet. The problem is the API strips photos of crucial metadata, including location, and gives you nerfed versions of your data. Plus, the rate limits were so unbearable that I eventually ripped it out.
But yeah, an app that runs on your phone would be a nice QoL improvement.
Syncthing from phone to a directory on PC?
That's what I do. Though I don't then put them into any system. Yet.
I did this by creating my own small password manager.
How easy would it be to integrate this with immich (instead of needing the access to google photo)?
Probably not hard. Timelinize's data sources have a standard API with just 2 methods [0], so it should be fairly trivial to implement, depending on how accessible Immich is.
To clarify, you don't grant access to Google Photos, you just do the Takeout from https://takeout.google.com to download your data first.
[0]: https://pkg.go.dev/github.com/timelinize/timelinize@v0.0.23/...
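For anyone curious what a two-method data source API can look like, here is a hypothetical sketch. The method names, the `Item` type, and the toy Immich importer are my assumptions for illustration, not the actual Timelinize interface (see the pkg.go.dev link above for the real one).

```go
package main

import "fmt"

// Item is a stand-in for a single timeline entry (photo, message, etc.).
type Item struct {
	Source string
	Text   string
}

// DataSource is a hypothetical two-method importer interface:
// one method decides whether an input belongs to this source,
// the other walks the input and emits items.
type DataSource interface {
	Recognize(path string) bool
	Import(path string, out chan<- Item) error
}

// immichSource is a toy implementation standing in for an Immich importer.
type immichSource struct{}

func (immichSource) Recognize(path string) bool { return path == "immich-export" }

func (immichSource) Import(path string, out chan<- Item) error {
	// A real importer would call the Immich API or read an export here.
	out <- Item{Source: "immich", Text: "photo #1"}
	close(out)
	return nil
}

func main() {
	var ds DataSource = immichSource{}
	if ds.Recognize("immich-export") {
		out := make(chan Item, 8)
		ds.Import("immich-export", out)
		for it := range out {
			fmt.Println(it.Source, it.Text)
		}
	}
}
```

The appeal of such a narrow interface is that external scripts only need to answer two questions: "is this input mine?" and "what items does it contain?"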
Nice project! If you don't like "timelinize" - have you looked at latin names? Perhaps something like Temperi?
In terms of Features, i'd like to see support for FindPenguins. A lot of interesting data (photos, videos, GPS coordinates, text) is already there.
A few latin names have been suggested, but nothing has stuck. The problem is they are usually difficult to spell and pronounce, which isn't really an improvement over the current situation :)
FindPenguins is cool! I don't use it myself, but anyone is welcome to implement a data source for it.
Hey - this is awesome. I've been working on a small local app like this to import financial data and present a dashboard, for the family to use together (wife and I). So yeah - great work here, taking control of your data.
I'm curious about real-time data, or cron jobs, though. I love the idea of importing my data into this, but it would be nicer if I could set it up to automatically poll for new data somehow. Does Timelinize do something like that? I didn't see it on the page.
Cool, yeah, the finance use case seems very relevant. Someday it'd be cool to have a Finance exploration page, like we do for other kinds of data.
Real-time/polling imports aren't yet supported, but that's not too difficult once we land on the right design for that feature.
I tinkered with a "drop zone" where you could designate a folder that, when you add files to it, Timelinize immediately imports it (then deletes the file from the drop zone).
But putting imports on a timer would be trivial.
As for branding, IMO you could go a bunch of directions:
Timelines
Tempor (temporal)
Chronos
Chronografik
Continuum
Momentum (moments, memory, momentum through time)
IdioSync (kinda hate this one tbh)
Who knows! Those are just the ones that fell out of my mouth while typing. It's just gotta have a memorable and easy-to-pronounce cadence. Even "Memorable" is a possibility LOL
-suggestions from some dude, not ChatGPT
Dateline (with the Dateline NBC theme song playing quietly in the background while you browse your history and achievements)
Momenta
Like others I really like the idea and the realisation looks great too!
I might not be the typical user for this, because I'd prefer my data to actually stay in the cloud where it is, but I'd still like to have it indexed and timelined. Can Timelinize do this? Like, instead of downloading everything from Google Photos, YouTube, Bluesky, whatever, just index what's there and offer the same interface? And only optionally download the actual data in addition to the metadata?
That's not really aligned with my vision/goals, which is to bring my data home; but to be clear, downloading your data doesn't mean it has to leave the cloud. You can have your cake and eat it too.
The debate between importing the data and indexing external data is a long, grueling one, but ultimately I cannot be satisfied unless my data is guaranteed to be locally available.
I suppose in the future it's possible we could add the ability to index external data, but this likely wouldn't work well in practice since most data sources lock down real-time access via their API restrictions.
Love the grind! One suggestion would be to add a demo link with some test data so we can see it in action.
I am also slowly "offlining" my life. Currently, it is a mix of synology, hard drives and all.
I have always thought about building a little dashboard to access everything really. Build a financial dashboard[1] and now onto photos.
[1] https://github.com/neberej/freemycash/
A live demo would be great, but I'm not sure how to generate the fake data in a way that imitates real data patterns. That's originally how I wanted to demo things, but the results weren't compelling. (It was a half-hearted effort, I admit.) So I switched to obfuscating real data.
FreeMyCash looks great! Yours is the second financial application I've heard of; maybe we need to look at adding some finance features soon.
Wow that's great! Interesting if it's possible to use not just a folder but like a s3-compatible backend for photos and for db backups as well
(I don't think all my photo/video archives would fit on my laptop, though the thumbnails definitely would, while minio or something replicated between my desktop plus a backup machine at Hetzner or something would definitely do the thing)
I don't think SQLite runs very well on S3 file systems. I think it would also be insufferably slow.
I even encountered crashes within SQLite when using exFAT -- so file system choice is definitely important! (I've since implemented a workaround for this bug by detecting exFAT and configuring SQLite to avoid WAL mode, so it's just... much slower.)
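The workaround described above amounts to choosing the SQLite journal mode based on the detected file system. Actual detection is platform-specific (e.g. a statfs call on Linux), so it's reduced to a string argument in this sketch; the PRAGMA statements are standard SQLite.

```go
package main

import (
	"fmt"
	"strings"
)

// journalPragma returns the PRAGMA to run after opening the database.
// WAL mode relies on shared-memory sidecar files that can misbehave on
// some file systems (exFAT among them), so we fall back to the slower
// but safe rollback journal there. Detecting the file system type is
// platform-specific and is stubbed out as a string argument here.
func journalPragma(fsType string) string {
	switch strings.ToLower(fsType) {
	case "exfat", "msdos", "fat32":
		return "PRAGMA journal_mode=DELETE;"
	default:
		return "PRAGMA journal_mode=WAL;"
	}
}

func main() {
	for _, fs := range []string{"ext4", "exFAT", "apfs"} {
		fmt.Printf("%-6s -> %s\n", fs, journalPragma(fs))
	}
}
```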
Definitely not sqlite-on-s3! Just for the photos and videos, and the periodic db backups
I see... that might make it hard to keep all the data together, which is one of the goals. But I will give it some thought.
I really like the local storage of this. Files and folders are the best!
(When noodling on this, I’ve also been wondering about putting metadata for files in sidecar files next to the files they describe, rather than a centralized SQLite database. Did you experiment with anything like that by any chance?)
Why sidecar metadata files? In general I've tried to minimize the number of files on disk since that makes copying slow and error-prone. (A future version of Timelinize will likely support storing ALL the data in a DB to make for faster, easier copying.) We'd still need a DB for the index anyway, which essentially becomes a copy of the metadata.
Nice one, thanks for sharing. For sure I’ll give it a try.
Have you thought of creating a setup so as to package all libraries and dependencies needed? You have a very nice installation guide, but there are many users who just want the setup.exe :-)
Thank you! Not sure I can package everything because of license requirements. IANAL. I think the container image basically automates the various dependencies, but I didn't create it and I don't use it so I'm not 100% sure.
Basically, I would love to know how to do this correctly for each platform, but I don't know how. Help would be welcomed.
Yeah I totally want this. How much data are we talking about on average?
Cool idea. Thanks for sharing. I was really annoyed by the way Google nerfed the maps timeline stuff last year. Obviously this project is way more ambitious than that, but just goes to show you how little Google cares about the longevity of your data.
Very cool! I have a sketchy pipeline for exporting my data from Gmaps to my personal site and always thought about building something like this.
This could be a really interesting as a digital forensics thing.
Sounds really cool. I’ve been wanting something like this. Kudos for building it!
I don’t see the link to the rep on on first glance of the linked site, so linking it here: https://github.com/timelinize/timelinize
Very nice project! Curiosity question: since you're taking data dumps once or twice a year, and let's say you also copy the photos, do you do any updates incrementally, or just replace the old dump with the new one?
Timelinize doesn't import duplicates by default, so you can just import the whole new Takeout and it will only keep what is new.
But you have control over this:
- You can customize what makes an item a duplicate or unique
- You can choose whether to update existing items, and how to update them (what to keep of the incoming item versus the existing one)
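The behavior described above can be sketched as a keyed merge: compute an identity key per item, skip exact duplicates, and optionally refresh an existing item's mutable fields from the incoming copy. The field names and key choice here are illustrative assumptions, not Timelinize's actual schema.

```go
package main

import "fmt"

// Item is a simplified timeline entry for illustration.
type Item struct {
	Timestamp string // identity field: when the item happened
	Hash      string // identity field: content hash
	Caption   string // mutable field an update might overwrite
}

// key is the customizable notion of "what makes an item a duplicate".
func key(it Item) string { return it.Timestamp + "|" + it.Hash }

// importItems merges incoming items into index. When updateExisting is
// true, a duplicate's non-identity fields are refreshed from the
// incoming copy; otherwise duplicates are skipped entirely.
func importItems(index map[string]Item, incoming []Item, updateExisting bool) (added, skipped int) {
	for _, it := range incoming {
		k := key(it)
		if _, ok := index[k]; ok {
			if updateExisting {
				index[k] = it
			}
			skipped++
			continue
		}
		index[k] = it
		added++
	}
	return added, skipped
}

func main() {
	index := map[string]Item{}
	importItems(index, []Item{{"2024-01-01", "abc", "old caption"}}, false)

	// Re-importing the same Takeout plus one new item:
	second := []Item{
		{"2024-01-01", "abc", "new caption"},
		{"2024-06-01", "def", "beach"},
	}
	added, skipped := importItems(index, second, false)
	fmt.Println("added:", added, "skipped:", skipped) // added: 1 skipped: 1
}
```

Under this scheme, re-importing a whole new Takeout is idempotent: only items whose identity key hasn't been seen before are actually added.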
Interesting; how easy is it to back it up somewhere -- yes, on a cloud for example -- and then restore/sync it on another machine? Is the data format portable and easy to move like this?
Yep -- a timeline is just a folder with regular files and folders in it. They're portable across OSes. I've tried to account for differences in case-sensitive and -insensitive file systems as well. So you can copy/move them and back them up like you would any other directory.
I had this same idea for a long time. Even took github.com/center for it (I've since changed how it's being used). Cool to see someone actually achieve it, well done.
For a name, how about 'Rain Barrel' ? Your own personal cloud
Beautiful app. Surprised to see JQuery for your frontend; brings back good old memories.
Ha, thanks! It’s actually AJQuery, just a two-line shim to gain the $ sugar. Otherwise vanilla JS.
I've been basically doing this for years via a private mastodon instance. Very nice to see!
That's great, I've been running a timeline of my life in excel, I wonder if this could replace it.
I've always wanted this but not enough to build it. I wonder if I can integrate this with my Monica instance. Thank you! I'm going to try it.
Will be curious how you use it. I plan to integrate a local LLM at some point, but it’s still nebulous in my head.