Pleroma issues — https://git.pleroma.social/pleroma/pleroma/-/issues (feed updated 2022-12-31T06:53:46Z)

https://git.pleroma.social/pleroma/pleroma/-/issues/2928 — "Support extended types" (Ilja, updated 2022-12-31)

We read in https://www.w3.org/TR/activitystreams-core/#object:
> When an implementation uses an extension type that overlaps with a core vocabulary type, the implementation MUST also specify the core vocabulary type.
The example given is
```json
{
"@context": [
"https://www.w3.org/ns/activitystreams",
{
"gr": "http://purl.org/goodrelations/v1#"
}
],
"type": ["Place", "gr:Location"],
"name": "Sally's Restaurant",
"longitude": 12.34,
"latitude": 56.78,
"gr:category": "restaurants/french_restaurants"
}
```
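Processing such a compound `type` could fall back to the core vocabulary roughly like this (an illustrative Python sketch, not Pleroma code; the names and the set of supported types are made up):

```python
CORE_TYPES = {"Place", "Note", "Article", "Event"}  # illustrative subset

def resolve_type(ap_type):
    """Return the first type we know how to handle, preferring extensions.

    Per AS2, `type` may be a single string or a list; an unknown extension
    type falls back to whichever core vocabulary type is also listed.
    """
    types = ap_type if isinstance(ap_type, list) else [ap_type]
    known_extensions = {}  # e.g. {"gr:Location": ...} if we supported it
    for t in types:
        if t in known_extensions:
            return t
    for t in types:
        if t in CORE_TYPES:
            return t
    return None

# The example object above would then be processed as a plain Place:
assert resolve_type(["Place", "gr:Location"]) == "Place"
```

A server without `gr:` support thus still renders the object, which is exactly the graceful degradation the spec is asking for.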
The idea is that if you don't support `["Place", "gr:Location"]`, you should still process the object as `Place`. AFAIK we don't have implementations like this in the wild yet, but this would partly solve the problem of having to implement an increasing number of custom objects. Having a server with support for this could make it more attractive for people to use extensions of core types like this, rather than just creating new ones.

https://git.pleroma.social/pleroma/pleroma/-/issues/2927 — "Unauthenticated access options for local / remote statuses are broken" (Lady Gaga, updated 2023-08-30)

In admin-fe you have options that disable unauthenticated access for local and remote statuses. I don't think there's anything wrong with the admin-fe option, I think it's a frontend bug -- but if you "disallow access" to the statuses, the frontend doesn't respect that and will display the local statuses and remote repeats anyway, although replies to remote statuses will be broken links as expected.
This can be problematic for those who would like to, say, show their own statuses but disallow people from seeing remote posts.
Confirmed in develop branch.
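Conceptually, the fix is that the backend itself must filter timelines by these settings instead of trusting the frontend. A hypothetical sketch of that server-side gate (the setting names here are illustrative, not Pleroma's actual config keys):

```python
def visible_to_unauthenticated(status, restrict):
    """Decide server-side whether an anonymous request may see a status.

    `restrict` mirrors the admin-fe toggles, e.g.
    {"local": True, "remote": False} means local statuses are hidden.
    """
    scope = "local" if status["local"] else "remote"
    return not restrict.get(scope, False)

def timeline_for(statuses, authenticated, restrict):
    if authenticated:
        return statuses
    # Repeats (Announces) of remote statuses must count as remote too,
    # otherwise remote repeats leak exactly as described in this report.
    return [s for s in statuses if visible_to_unauthenticated(s, restrict)]
```

The point is that the filtering happens before the response is built, so no client can opt out of it.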
Edit: Backend bug *

https://git.pleroma.social/pleroma/pleroma/-/issues/2926 — "Don't allow Announces of statuses of banned users" (tusooa, updated 2023-05-08)

Someone repeated a status of a banned user, and there's such a thing on my timeline:
![image](/uploads/0502ac1de8d16b4f97346362fb78f050/image.png)

https://git.pleroma.social/pleroma/pleroma/-/issues/2925 — "Support running multi-node" (tusooa, updated 2023-05-08)
Part of https://git.pleroma.social/pleroma/pleroma/-/issues/798
- [x] libcluster: automatically discover and connect to nodes
- [ ] Cachex: *does not support changing num of nodes on the fly*, available substitutions:
- [Nebulex](https://hexdocs.pm/nebulex): an existing solution, supports distributed cache/partitioning, requires wrapping to match Cachex api
- cache replication, manually: https://github.com/whitfin/cachex/issues/219
- redis
- [ ] shoutbox:
- [ ] should just work (tm/mc) with Phoenix.Channel (see https://github.com/poeticoding/phoenix_chat_example )
  - [ ] messages do not propagate if a new node joins later
- [ ] Config/Emoji/...: need to propagate from and to nodes (we can do Phoenix.PubSub )
- [ ] Endpoint: should just work - how to determine load balancing heuristics?
- [ ] Oban: ?
- [ ] Streamer: need to broadcast to all nodes (mastodon uses redis to push statuses to streamer services, we can use pubsub too)
- more?

https://git.pleroma.social/pleroma/pleroma/-/issues/2924 — "Refactor: Make sure uploaders are fully modular" (Ilja, updated 2022-08-27)

Uploaders are modular. Ideally you should be able to write a module, drop it in the code somewhere, and it should be able to run without extra changes.
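A fully self-contained uploader would own things like its `base_url` instead of leaking them into `lib/pleroma/upload.ex`. A hypothetical sketch of that shape (Python for illustration; the names are invented, not Pleroma's API):

```python
from abc import ABC, abstractmethod

class Uploader(ABC):
    """Everything an uploader needs lives in the module itself."""

    #: human-readable description, analogous to how MRF modules expose one
    description = "abstract uploader"

    @abstractmethod
    def base_url(self) -> str:
        """Public URL prefix; callers never hardcode or special-case this."""

    @abstractmethod
    def put_file(self, name: str, data: bytes) -> str:
        """Store the file and return its public URL."""

class LocalUploader(Uploader):
    description = "stores files on the local filesystem"

    def __init__(self, root: str, url: str):
        self.root, self.url = root, url

    def base_url(self) -> str:
        return self.url

    def put_file(self, name: str, data: bytes) -> str:
        # (actual disk write elided for brevity)
        return f"{self.base_url()}/{name}"
```

With this shape, dropping in a new backend means implementing the interface and nothing else in core needs to change.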
In https://git.pleroma.social/pleroma/pleroma/-/merge_requests/3654 I noticed that some stuff leaks out in `lib/pleroma/upload.ex` (more specifically `base_url/0`, but check for others as well).
We may also want to check whether we want to make the `description` part of the modules, similar to how it's done for MRF.
Once https://git.pleroma.social/pleroma/pleroma/-/merge_requests/3654 is merged, we should probably fix that up.

https://git.pleroma.social/pleroma/pleroma/-/issues/2923 — "Occasionally failing to retrieve CI base image" (Sean King, updated 2023-05-08)

### Bug description
Occasionally, I happen to notice the CI base image will fail to be retrieved (i.e. in the analysis stage), causing that CI stage to fail. See log from this job: https://git.pleroma.social/pleroma/pleroma/-/jobs/217816

https://git.pleroma.social/pleroma/pleroma/-/issues/2922 — "Show warnings causing config to be unable to be migrated to database" (Sean King, updated 2023-05-08)

When I was going through the process of migrating the configuration to the database for my development environment, it failed because of config deprecation warnings. However, the only message it gave me was: "Migration is not allowed until all deprecation warnings have been resolved."
This confused me, and I had to run `mix phx.server` to figure out what it was. Thankfully, it was just because I was using `Exiftool` instead of `Exiftool.StripLocation` for the upload filter, and I was able to correct that. Though, it would be easier to debug this in the future if the deprecation warnings causing the configuration to be unable to migrate to the DB were listed when said task fails.

https://git.pleroma.social/pleroma/pleroma/-/issues/2921 — "Split static dir into several dirs by purpose" (HJ, updated 2023-05-08)

I.e. separate the general-purpose static dir that admins can use to host custom resources, and move out the stuff that Pleroma manages itself, most importantly emoji. Also frontends and whatever other stuff there might be.

https://git.pleroma.social/pleroma/pleroma/-/issues/2920 — "Frontend Installation returning 400 - Could not download or unzip the frontend" (Fikran Mutasa'il, updated 2023-05-08)

<!--
### Precheck
* For support use https://git.pleroma.social/pleroma/pleroma-support or [community channels](https://git.pleroma.social/pleroma/pleroma#community-channels).
* Please do a quick search to ensure no similar bug has been reported before. If the bug has not been addressed after 2 weeks, it's fine to bump it.
* Try to ensure that the bug is actually related to the Pleroma backend. For example, if a bug happens in Pleroma-FE but not in Mastodon-FE or mobile clients, it's likely that the bug should be filed in [Pleroma-FE](https://git.pleroma.social/pleroma/pleroma-fe/issues/new) repository.
-->
### Environment
* Installation type: Source
* Pleroma version: 2.4.3
* Frontend version: b13d8f7e
* Elixir version: Elixir 1.10.3 (compiled with Erlang/OTP 22)
* Operating system: Debian 11
* PostgreSQL version (`psql -V`): psql (PostgreSQL) 13.7 (Debian 13.7-0+deb11u1)
### Bug description
When I try to install an alternate Frontend from the Admin web interface, I receive the following error:
"Request failed with status code 400 - Could not download or unzip the frontend"
Note, this only happens for fedi-fe, kenoma, and soapbox-fe. admin-fe and pleroma-fe seem to work.
I followed the installation guide, so if this is a permissions issue, the issue resides in the installation process.

https://git.pleroma.social/pleroma/pleroma/-/issues/2918 — "List timelines improvements" (HJ, updated 2023-05-08)

* Mastodon [has](https://docs.joinmastodon.org/methods/timelines/lists/) a property `replies_policy` that seems to work similarly to our timeline filter - to include replies to follows (the default on Mastodon), replies to people in the list, or no replies at all. We can add `all` to replicate current behavior.
* Need to check, but I think our timeline filters don't apply to list timelines
* Add an "include self" option, to replicate the home timeline experience. This is currently my main use case for a list - to have a reduced, comfier version of the home timeline with only close frens in it - but the fact that I myself am not included in it makes it a bit awkward.
* Need to check if it's possible to add myself there
* Add a metadata field - let the user upload an "avatar" for the list, set a description, etc. Mostly for UI purposes - currently, if you want to pin a list to navigation, it will just use the first letter as an icon due to lack of better options.

https://git.pleroma.social/pleroma/pleroma/-/issues/2917 — "IPFS media storage for pleroma (free S3 alternative)" (CEO of Fediverse, updated 2023-05-08)

IPFS is a decentralized cloud CDN that works peer to peer. https://ipfs.tech/
Pleroma can use IPFS as a free alternative to Amazon S3.
What can be stored in IPFS: media attachments, user avatars, emoji (maybe)
I'd create a merge request here, but can't. There is working PoC code here: https://dev.federated.media/FM/pleroma/pulls/1

https://git.pleroma.social/pleroma/pleroma/-/issues/2915 — "Recursively fetch all messages in a conversation" (Skyler Hawthorne, updated 2023-05-08)

Currently, if the backend receives a new post from a follower but fails to fetch other messages in the thread from remote servers, these other messages never make it into the DB. It will not retry or queue up the fetches it needs to do.
An example can be seen on my own instance.
https://pleroma.dead10ck.com/notice/AMDNYR5vLOk0HTAqS8
It looks as if this user just made a solitary post, but you can see the full thread if you go to the source server:
https://mastodon.social/@Gargron/108769773321577551
From the logs, we can see that it timed out while fetching the other posts:
```
Aug 05 10:24:11 dead10ck.com pleroma[180018]: 10:24:11.255 [error] Error while fetching https://one.darnell.one/users/darnell/statuses/108769046062132141: {:error, :timeout}
Aug 05 10:24:11 dead10ck.com pleroma[180018]: 10:24:11.255 [warn] Couldn't fetch "https://one.darnell.one/users/darnell/statuses/108769046062132141", error: nil
```
So the real issue seems to be that the backend never tries to backfill when it encounters an error during the recursive fetch. It just gives up forever.
I was thinking, since you can search for these individual posts and get them added to your DB that way, it might make sense to recursively fetch all messages in a thread basically any time you open a message in a conversation. It probably wouldn't be that expensive since you only have to fetch from the DB first to see if it's there before doing requests to the remote server.
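That first idea - check the DB first and only hit the remote server for messages we don't have - could be sketched roughly like this (hypothetical Python; `db` and `fetch_remote` stand in for the real storage and HTTP layers):

```python
def backfill_thread(object_id, db, fetch_remote, seen=None):
    """Recursively fetch a thread, tolerating per-object failures.

    Objects already in the DB cost only a lookup; a failed remote fetch
    is skipped for now but gets retried the next time the conversation
    is opened, instead of being given up on forever.
    """
    seen = seen if seen is not None else set()
    if object_id in seen:  # guard against reply cycles
        return
    seen.add(object_id)
    obj = db.get(object_id)
    if obj is None:
        try:
            obj = fetch_remote(object_id)
        except Exception:
            return  # leave the gap; retried on the next open
        db[object_id] = obj
    parent = obj.get("inReplyTo")
    if parent:
        backfill_thread(parent, db, fetch_remote, seen)
```

Because cached objects short-circuit, repeated opens of the same conversation only pay for the still-missing messages.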
Or, alternatively and perhaps more complicated, a kind of "job system" could be introduced that queues up these fetches; in the event of failure, the job just stays persisted until it has succeeded or crossed some timeout threshold.

https://git.pleroma.social/pleroma/pleroma/-/issues/2914 — "Update api-spec to also mention privileges" (Ilja, updated 2023-05-08)

See https://git.pleroma.social/pleroma/pleroma/-/merge_requests/3676
The spec can be found in lib/pleroma/web/api_spec/
To build it locally
1. in pleroma folder `mix pleroma.openapi_spec spec.json`
2. copy `spec.json` to root of api-docs folder
3. in api-docs folder:
```sh
npm install
npm run dev
python3 -m http.server 1337
```
Then visit <http://0.0.0.0:1337/>.

https://git.pleroma.social/pleroma/pleroma/-/issues/2913 — "[Tracker] Elixir / Erlang OTP compatibility" (Haelwenn, updated 2023-06-27)

# Erlang
## Erlang OTP 25
- [ ] `:slave.start/3` warning → #2897
- [x] `websocket_client` uses `:http_uri.parse` → https://git.pleroma.social/pleroma/pleroma/-/merge_requests/3649
## Erlang OTP 26
- [ ] `ecto` will need https://github.com/elixir-ecto/ecto/commit/23a55a260853f1315362323d1de241a5c22ced9b
# Elixir
## Elixir 1.14
https://github.com/elixir-lang/elixir/releases/tag/v1.14.0-rc.0 - 2022-08-01
- [x] `prometheus_ex` uses `Kernel.Utils.defdelegate/2` → <https://github.com/deadtrickster/prometheus.ex/pull/47>
```
1) test it logs error when script is not found (Pleroma.Web.MediaProxy.Invalidation.ScriptTest)
test/pleroma/web/media_proxy/invalidation/script_test.exs:11
Assertion with == failed
code: assert Invalidation.Script.purge(["http://example.com/media/example.jpg"], script_path: "./example") ==
{:error, "%ErlangError{original: :enoent}"}
left: {:error, "%ErlangError{original: :enoent, reason: nil}"}
right: {:error, "%ErlangError{original: :enoent}"}
stacktrace:
(ex_unit 1.14.0-rc.0) lib/ex_unit/capture_log.ex:105: ExUnit.CaptureLog.with_log/2
(ex_unit 1.14.0-rc.0) lib/ex_unit/capture_log.ex:74: ExUnit.CaptureLog.capture_log/2
test/pleroma/web/media_proxy/invalidation/script_test.exs:12: (test)
```

https://git.pleroma.social/pleroma/pleroma/-/issues/2911 — "rich media parse failure sometimes just dumps a data structure of proxy options" (0rigin, updated 2023-05-08)

### Environment
* Installation type (OTP or From Source): from source
* Pleroma version (could be found in the "Version" tab of settings in Pleroma-FE): 2.4.3
* Elixir version (`elixir -v` for from source installations, N/A for OTP):
```
Erlang/OTP 23 [erts-11.1.8] [source] [64-bit] [smp:2:2] [ds:2:2:10] [async-threads:1]
Elixir 1.10.3 (compiled with Erlang/OTP 22)
```
* Operating system: Debian 11
* PostgreSQL version (`psql -V`): psql (PostgreSQL) 13.7 (Debian 13.7-0+deb11u1)
### Bug description
Getting some errors in the log like follows:
`[warn] Rich media error for <removed>: {:options, {:socket_options, [socks5_transport: :hackney_ssl, socks5_resolve: :undefined, socks5_pass: :undefined, socks5_user: :undefined, socks5_port: 1080, socks5_host: :localhost, insecure: false, ssl_options: [versions: [:"tlsv1.2", :"tlsv1.1", :tlsv1]], packet_size: 0, packet: 0, header: 0, active: false, mode: :binary]}}`
This appears to just be a dump of options similar to those set in the config for `:http` and while it's likely relevant to the issue (some rich media fetches failing over a forward proxy), it doesn't provide a lot of insight as it's impossible to tell _what_ exactly is breaking or why, without being at least somewhat skilled at reading Elixir/Erlang.
Apologies if this is too trivial an issue, but I'm trying to debug forward proxy problems and am running into this. I have no experience with the language or framework, so I don't know yet how to modify/fix it myself; in part this is also a note to self.

https://git.pleroma.social/pleroma/pleroma/-/issues/2910 — "IPFS ActivityPub support" (Haelwenn, updated 2023-05-08)

`2022-07-31 21:25 <@lanodan>` For IPFS I feel like the current uploader[1] would work for develop, but I think for a release we should add actual IPFS support into ActivityPub, quite like how PeerTube also gives torrent URLs; this way other instances can use other IPFS gateways. At least for me it's weird to only give an IPFS gateway URL.
1: https://git.pleroma.social/pleroma/pleroma/-/merge_requests/3654
Assignee: Haelwenn

https://git.pleroma.social/pleroma/pleroma/-/issues/2909 — "Remove media when corresponding post(s) is deleted" (Ilja, updated 2023-05-08)

It seems media isn't deleted when a post expires. A quick check tells me the same is true for posts deleted through the web interface.
When a post is deleted, I expect the media to be deleted as well (unless it's shared with another post ofc).
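The "unless it's shared with another post" condition amounts to simple reference counting over attachments; a hypothetical sketch of the idea (Python for illustration, not the actual schema):

```python
def delete_post(post_id, posts, media_store):
    """Delete a post, then delete any attachment no other post references."""
    post = posts.pop(post_id)
    for href in post["attachments"]:
        still_used = any(href in p["attachments"] for p in posts.values())
        if not still_used:
            media_store.discard(href)  # e.g. unlink the file / S3 object
```

The scan over remaining posts stands in for a DB query like the one below; a real implementation would index attachments rather than iterate.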
This may or may not be implemented together with https://git.pleroma.social/pleroma/pleroma/-/merge_requests/3677
Other info I found myself I can add here:
We store info on media in the objects table
```sql
select * from public.objects o
where o.data ->> 'type' in ('Document')
order by inserted_at
```
I have a bot that posts music files every hour, so I want to clean that up a bit. Posts expire after 7 days, but media remains. I look for "Document" objects that don't have a corresponding post any more with the following query. Maybe this can be a start for a better feature.
```sql
select
'rm ' || regexp_replace(replace(attachment.value ->> 'href', 'https://ilja.space/media/', ''), '/.*','') || '/*' "command"
from public.objects o
, jsonb_array_elements(o.data -> 'url') attachment
where o.data ->> 'type' in ('Document')
and o.data ->> 'actor' = 'https://ilja.space/users/cc_music_bot'
and attachment.value ->> 'href' not in (
select url ->> 'href' url from public.activities a
join public.objects o on o."data" ->> 'id' = a.data ->> 'object',
jsonb_array_elements(o.data -> 'attachment') attachment,
jsonb_array_elements(attachment -> 'url') url
where a.data ->> 'actor' = 'https://ilja.space/users/cc_music_bot'
and o.data ->> 'type' = 'Note'
)
order by inserted_at
```

https://git.pleroma.social/pleroma/pleroma/-/issues/2905 — "Exploring better long-term solutions for frontends issues" (Sean King, updated 2023-05-08)

As seen from the "notice compatibility routes" spiel, we just need to explore better long-term solutions for the issues frontends experience. Hardcoding routes is clearly not the way to go. And although not a big deal, Nginx solutions are merely short-term solutions. The reason I had the other issue open was because Pleroma devs expressed interest in having something before 2.5. That being said, there could be something better and we just need to explore it.
Again, please abide by community guidelines. See pleroma-meta!2

https://git.pleroma.social/pleroma/pleroma/-/issues/2904 — "MRF SimplePolicy: Allows rejecting deletes regardless of the host already being globally rejected or not" (Haelwenn, updated 2023-05-08)

The intention that seems to be in <https://git.pleroma.social/pleroma/pleroma/-/merge_requests/2371> was to just not process deletions in some specific cases from already rejected hosts.
I think it should either do it according to that intention, or maybe just always let Deletes pass through.
That said it might still be useful to reject deletes from spam machines.
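The intended behaviour could be expressed as: Deletes from rejected hosts pass through unless the admin explicitly opted into rejecting them. A hypothetical sketch (the option names are made up for illustration, not actual SimplePolicy config):

```python
from urllib.parse import urlparse

def simple_policy_filter(activity, rejected_hosts, reject_deletes_from=()):
    """Drop activities from rejected hosts, with a carve-out for Deletes."""
    host = urlparse(activity["actor"]).hostname
    if host not in rejected_hosts:
        return activity
    if activity["type"] == "Delete" and host not in reject_deletes_from:
        return activity  # let remote deletions still propagate
    return None  # rejected
```

The spam-machine case is then the explicit opt-in: add the host to `reject_deletes_from` and its Deletes are dropped too.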
(Should also be noted that various existing servers do not process Delete activities.)

https://git.pleroma.social/pleroma/pleroma/-/issues/2903 — "Broken Accept-header handling to fetch ActivityPub data" (Haelwenn, updated 2023-05-08)

Post by p@FSE detailing it: https://freespeechextremist.com/objects/66c80e12-8828-4e8f-8e83-0b606019190c
Relevant part copied:
## Correct behavior
* ``Accept: application/json`` returns JSON
* ``Accept: application/ld+json`` returns JSON
* ``Accept: application/activity+json`` returns JSON
* ``Accept: application/activity+json;q=0.9, text/html;q=0.1`` returns JSON
* ``Accept: application/activity+json;q=0.1, text/html;q=0.9`` results in an attempt to load the webapp
## Incorrect behavior
* ``Accept: application/json, */*`` results in an attempt to load the webapp
* ``Accept: application/activity+json;q=0.9, */*;q=0.1`` results in an attempt to load the webapp
* ``Accept: application/*`` results in a 302
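The correct behaviour above boils down to q-value comparison: serve ActivityPub JSON whenever an AP media type's quality is at least as high as any non-AP alternative. A rough sketch of that negotiation (illustrative Python, not the actual Phoenix code; wildcard media ranges like `application/*` are ignored for brevity):

```python
AP_TYPES = {"application/json", "application/ld+json",
            "application/activity+json"}

def parse_accept(header):
    """Parse an Accept header into (media_range, q) pairs."""
    out = []
    for part in header.split(","):
        fields = [f.strip() for f in part.split(";")]
        mtype, q = fields[0], 1.0
        for f in fields[1:]:
            if f.startswith("q="):
                q = float(f[2:])
        out.append((mtype, q))
    return out

def wants_activitypub(header):
    """True when an AP JSON type is the best (or tied-best) match."""
    pairs = parse_accept(header)
    best_ap = max((q for m, q in pairs if m in AP_TYPES), default=0.0)
    best_other = max((q for m, q in pairs if m not in AP_TYPES), default=0.0)
    return best_ap > 0 and best_ap >= best_other
```

Under this rule, `application/json, */*` serves JSON (tied q-values favor AP), while `application/activity+json;q=0.1, text/html;q=0.9` falls through to the webapp, matching the lists above.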
I wouldn't be surprised if the relevant code to fix is in the Phoenix Framework or one of its dependencies.